2025-04-01 18:49:20.892640 | Job console starting... 2025-04-01 18:49:20.905035 | Updating repositories 2025-04-01 18:49:20.967325 | Preparing job workspace 2025-04-01 18:49:22.556579 | Running Ansible setup... 2025-04-01 18:49:27.379652 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-04-01 18:49:28.093359 | 2025-04-01 18:49:28.093512 | PLAY [Base pre] 2025-04-01 18:49:28.124335 | 2025-04-01 18:49:28.124490 | TASK [Setup log path fact] 2025-04-01 18:49:28.148317 | orchestrator | ok 2025-04-01 18:49:28.168836 | 2025-04-01 18:49:28.168960 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-04-01 18:49:28.200958 | orchestrator | ok 2025-04-01 18:49:28.216242 | 2025-04-01 18:49:28.216352 | TASK [emit-job-header : Print job information] 2025-04-01 18:49:28.277105 | # Job Information 2025-04-01 18:49:28.277379 | Ansible Version: 2.15.3 2025-04-01 18:49:28.277435 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-04-01 18:49:28.277483 | Pipeline: post 2025-04-01 18:49:28.277517 | Executor: 7d211f194f6a 2025-04-01 18:49:28.277548 | Triggered by: https://github.com/osism/testbed/commit/d7b74b5c3a2cf2d5de29126c808315aa4c839539 2025-04-01 18:49:28.277579 | Event ID: 025763b8-0f2a-11f0-91ce-847b558b136b 2025-04-01 18:49:28.286847 | 2025-04-01 18:49:28.286965 | LOOP [emit-job-header : Print node information] 2025-04-01 18:49:28.438249 | orchestrator | ok: 2025-04-01 18:49:28.438434 | orchestrator | # Node Information 2025-04-01 18:49:28.438468 | orchestrator | Inventory Hostname: orchestrator 2025-04-01 18:49:28.438493 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-04-01 18:49:28.438515 | orchestrator | Username: zuul-testbed06 2025-04-01 18:49:28.438535 | orchestrator | Distro: Debian 12.10 2025-04-01 18:49:28.438558 | orchestrator | Provider: static-testbed 2025-04-01 18:49:28.438579 | orchestrator | Label: testbed-orchestrator 2025-04-01 18:49:28.438600 | orchestrator | Product Name: OpenStack Nova 2025-04-01 18:49:28.438619 | orchestrator | Interface IP: 81.163.193.140 2025-04-01 18:49:28.460733 | 2025-04-01 18:49:28.460851 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-04-01 18:49:28.924843 | orchestrator -> localhost | changed 2025-04-01 18:49:28.934135 | 2025-04-01 18:49:28.934304 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-04-01 18:49:29.969130 | orchestrator -> localhost | changed 2025-04-01 18:49:29.995683 | 2025-04-01 18:49:29.995815 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-04-01 18:49:30.302408 | orchestrator -> localhost | ok 2025-04-01 18:49:30.320165 | 2025-04-01 18:49:30.320408 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-04-01 18:49:30.355445 | orchestrator | ok 2025-04-01 18:49:30.373668 | orchestrator | included: /var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-04-01 18:49:30.382564 | 2025-04-01 18:49:30.382667 | TASK [add-build-sshkey : Create Temp SSH key] 2025-04-01 18:49:31.062587 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-04-01 18:49:31.063030 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/0cac353884db48459c0dd2a5bfbcc868_id_rsa 2025-04-01 18:49:31.063129 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/0cac353884db48459c0dd2a5bfbcc868_id_rsa.pub 2025-04-01 18:49:31.063224 | orchestrator -> localhost | The key fingerprint is: 2025-04-01 18:49:31.063294 | orchestrator -> localhost | SHA256:i7cACLC3E+ywqgE6R7A/Ez3fRN47wHobaUz31zrxDUg zuul-build-sshkey 2025-04-01 18:49:31.063358 | orchestrator -> localhost | The key's randomart image is: 2025-04-01 18:49:31.063417 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-04-01 18:49:31.063474 | orchestrator -> localhost | |. | 2025-04-01 18:49:31.063528 | orchestrator -> localhost | |.o | 2025-04-01 18:49:31.063606 | orchestrator -> localhost | |= + . | 2025-04-01 18:49:31.063665 | orchestrator -> localhost | | O = + . E | 2025-04-01 18:49:31.063721 | orchestrator -> localhost | |+ B + S o. . | 2025-04-01 18:49:31.063778 | orchestrator -> localhost | |+o o + B = o. o. | 2025-04-01 18:49:31.063852 | orchestrator -> localhost | |= = = X o . .+o| 2025-04-01 18:49:31.063911 | orchestrator -> localhost | |.+ o = + . ...o| 2025-04-01 18:49:31.063967 | orchestrator -> localhost | |. o .. | 2025-04-01 18:49:31.064024 | orchestrator -> localhost | +----[SHA256]-----+ 2025-04-01 18:49:31.064151 | orchestrator -> localhost | ok: Runtime: 0:00:00.173853 2025-04-01 18:49:31.084033 | 2025-04-01 18:49:31.084195 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-04-01 18:49:31.118109 | orchestrator | ok 2025-04-01 18:49:31.130090 | orchestrator | included: /var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-04-01 18:49:31.140948 | 2025-04-01 18:49:31.141048 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-04-01 18:49:31.165315 | orchestrator | skipping: Conditional result was False 2025-04-01 18:49:31.174186 | 2025-04-01 18:49:31.174290 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-04-01 18:49:31.742679 | orchestrator | changed 2025-04-01 18:49:31.752809 | 2025-04-01 18:49:31.752932 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-04-01 18:49:32.037476 | orchestrator | ok 2025-04-01 18:49:32.047539 | 2025-04-01 18:49:32.047660 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-04-01 18:49:32.436650 | orchestrator | ok 2025-04-01 18:49:32.446943 | 2025-04-01 18:49:32.447067 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-04-01 18:49:32.839543 | orchestrator | ok 2025-04-01 18:49:32.850281 | 2025-04-01 18:49:32.850407 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-04-01 18:49:32.888651 | orchestrator | skipping: Conditional result was False 2025-04-01 18:49:32.931340 | 2025-04-01 18:49:32.931462 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-04-01 18:49:33.303083 | orchestrator -> localhost | changed 2025-04-01 18:49:33.319093 | 2025-04-01 18:49:33.319234 | TASK [add-build-sshkey : Add back temp key] 2025-04-01 18:49:33.649872 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/0cac353884db48459c0dd2a5bfbcc868_id_rsa (zuul-build-sshkey) 2025-04-01 18:49:33.650344 | orchestrator -> localhost | ok: 
Runtime: 0:00:00.016770 2025-04-01 18:49:33.663928 | 2025-04-01 18:49:33.664068 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-04-01 18:49:34.022675 | orchestrator | ok 2025-04-01 18:49:34.031272 | 2025-04-01 18:49:34.031384 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-04-01 18:49:34.066230 | orchestrator | skipping: Conditional result was False 2025-04-01 18:49:34.091094 | 2025-04-01 18:49:34.091234 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-04-01 18:49:34.508155 | orchestrator | ok 2025-04-01 18:49:34.538000 | 2025-04-01 18:49:34.538152 | TASK [validate-host : Define zuul_info_dir fact] 2025-04-01 18:49:34.574493 | orchestrator | ok 2025-04-01 18:49:34.583011 | 2025-04-01 18:49:34.583121 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-04-01 18:49:34.872845 | orchestrator -> localhost | ok 2025-04-01 18:49:34.882138 | 2025-04-01 18:49:34.882297 | TASK [validate-host : Collect information about the host] 2025-04-01 18:49:36.066832 | orchestrator | ok 2025-04-01 18:49:36.081812 | 2025-04-01 18:49:36.081923 | TASK [validate-host : Sanitize hostname] 2025-04-01 18:49:36.161621 | orchestrator | ok 2025-04-01 18:49:36.174387 | 2025-04-01 18:49:36.174528 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-04-01 18:49:36.714248 | orchestrator -> localhost | changed 2025-04-01 18:49:36.724164 | 2025-04-01 18:49:36.724321 | TASK [validate-host : Collect information about zuul worker] 2025-04-01 18:49:37.273116 | orchestrator | ok 2025-04-01 18:49:37.282150 | 2025-04-01 18:49:37.282285 | TASK [validate-host : Write out all zuul information for each host] 2025-04-01 18:49:37.806907 | orchestrator -> localhost | changed 2025-04-01 18:49:37.838451 | 2025-04-01 18:49:37.838580 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-04-01 18:49:38.121462 | orchestrator | ok 2025-04-01 18:49:38.131617 | 2025-04-01 18:49:38.131730 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-04-01 18:50:20.918476 | orchestrator | changed: 2025-04-01 18:50:20.918693 | orchestrator | .d..t...... src/ 2025-04-01 18:50:20.918732 | orchestrator | .d..t...... src/github.com/ 2025-04-01 18:50:20.918758 | orchestrator | .d..t...... src/github.com/osism/ 2025-04-01 18:50:20.918779 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-04-01 18:50:20.918799 | orchestrator | RedHat.yml 2025-04-01 18:50:20.933451 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-04-01 18:50:20.933468 | orchestrator | RedHat.yml 2025-04-01 18:50:20.933520 | orchestrator | = 2.2.0"... 2025-04-01 18:50:32.684229 | orchestrator | 18:50:32.683 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-04-01 18:50:32.760126 | orchestrator | 18:50:32.759 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"... 2025-04-01 18:50:33.829567 | orchestrator | 18:50:33.829 STDOUT terraform: - Installing hashicorp/local v2.5.2... 2025-04-01 18:50:34.663974 | orchestrator | 18:50:34.663 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-04-01 18:50:35.318057 | orchestrator | 18:50:35.317 STDOUT terraform: - Installing hashicorp/null v3.2.3... 
2025-04-01 18:50:36.163954 | orchestrator | 18:50:36.163 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80) 2025-04-01 18:50:37.113805 | orchestrator | 18:50:37.113 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-04-01 18:50:38.156574 | orchestrator | 18:50:38.156 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-04-01 18:50:38.156627 | orchestrator | 18:50:38.156 STDOUT terraform: Providers are signed by their developers. 2025-04-01 18:50:38.156640 | orchestrator | 18:50:38.156 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-04-01 18:50:38.156708 | orchestrator | 18:50:38.156 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-04-01 18:50:38.156737 | orchestrator | 18:50:38.156 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-04-01 18:50:38.156809 | orchestrator | 18:50:38.156 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-04-01 18:50:38.156851 | orchestrator | 18:50:38.156 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-04-01 18:50:38.156859 | orchestrator | 18:50:38.156 STDOUT terraform: you run "tofu init" in the future. 2025-04-01 18:50:38.156920 | orchestrator | 18:50:38.156 STDOUT terraform: OpenTofu has been successfully initialized! 2025-04-01 18:50:38.156929 | orchestrator | 18:50:38.156 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-04-01 18:50:38.156975 | orchestrator | 18:50:38.156 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-04-01 18:50:38.156983 | orchestrator | 18:50:38.156 STDOUT terraform: should now work. 2025-04-01 18:50:38.157054 | orchestrator | 18:50:38.156 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-04-01 18:50:38.157098 | orchestrator | 18:50:38.157 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-04-01 18:50:38.157151 | orchestrator | 18:50:38.157 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-04-01 18:50:38.321749 | orchestrator | 18:50:38.321 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-01 18:50:38.501523 | orchestrator | 18:50:38.501 STDOUT terraform: Created and switched to workspace "ci"! 2025-04-01 18:50:38.501629 | orchestrator | 18:50:38.501 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-04-01 18:50:38.501768 | orchestrator | 18:50:38.501 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-04-01 18:50:38.501821 | orchestrator | 18:50:38.501 STDOUT terraform: for this configuration. 2025-04-01 18:50:38.704355 | orchestrator | 18:50:38.704 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 
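For context, the providers reported by `tofu init` above (hashicorp/local v2.5.2, hashicorp/null v3.2.3, terraform-provider-openstack/openstack v3.0.0 matching ">= 1.53.0") would come from a `required_providers` block roughly like the sketch below. This is an illustration only, not the testbed repository's actual versions file; the `>= 2.2.0` fragment visible earlier in the log appears to be the constraint for hashicorp/local, but that is an inference.

```hcl
# Illustrative versions.tf consistent with the init output above.
# The exact constraints in the testbed repository may differ.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.2 in this run
    }
    null = {
      source = "hashicorp/null" # no constraint visible; resolved to v3.2.3
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.0.0
    }
  }
}
```

Committing the `.terraform.lock.hcl` file that OpenTofu reports creating here pins the resolved provider versions and their hashes, which is why later `tofu init` runs for this configuration select the same providers by default.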
2025-04-01 18:50:38.858446 | orchestrator | 18:50:38.858 STDOUT terraform: ci.auto.tfvars 2025-04-01 18:50:39.068826 | orchestrator | 18:50:39.067 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-01 18:50:39.966805 | orchestrator | 18:50:39.966 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-04-01 18:50:40.493924 | orchestrator | 18:50:40.493 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-04-01 18:50:40.749031 | orchestrator | 18:50:40.748 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-04-01 18:50:40.749094 | orchestrator | 18:50:40.748 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-04-01 18:50:40.749105 | orchestrator | 18:50:40.749 STDOUT terraform:  + create 2025-04-01 18:50:40.749156 | orchestrator | 18:50:40.749 STDOUT terraform:  <= read (data resources) 2025-04-01 18:50:40.749164 | orchestrator | 18:50:40.749 STDOUT terraform: OpenTofu will perform the following actions: 2025-04-01 18:50:40.749173 | orchestrator | 18:50:40.749 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-04-01 18:50:40.749181 | orchestrator | 18:50:40.749 STDOUT terraform:  # (config refers to values not yet known) 2025-04-01 18:50:40.749215 | orchestrator | 18:50:40.749 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-04-01 18:50:40.749266 | orchestrator | 18:50:40.749 STDOUT terraform:  + checksum = (known after apply) 2025-04-01 18:50:40.749299 | orchestrator | 18:50:40.749 STDOUT terraform:  + created_at = (known after apply) 2025-04-01 18:50:40.749340 | orchestrator | 18:50:40.749 STDOUT terraform:  + file = (known after apply) 2025-04-01 18:50:40.749374 | orchestrator | 18:50:40.749 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.749404 | orchestrator | 18:50:40.749 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.749435 | orchestrator | 18:50:40.749 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-01 18:50:40.749463 | orchestrator | 18:50:40.749 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-01 18:50:40.749506 | orchestrator | 18:50:40.749 STDOUT terraform:  + most_recent = true 2025-04-01 18:50:40.749531 | orchestrator | 18:50:40.749 STDOUT terraform:  + name = (known after apply) 2025-04-01 18:50:40.749553 | orchestrator | 18:50:40.749 STDOUT terraform:  + protected = (known after apply) 2025-04-01 18:50:40.749589 | orchestrator | 18:50:40.749 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.749624 | orchestrator | 18:50:40.749 STDOUT terraform:  + schema = (known after apply) 2025-04-01 18:50:40.749656 | orchestrator | 18:50:40.749 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-01 18:50:40.749689 | orchestrator | 18:50:40.749 STDOUT terraform:  + tags = (known after apply) 2025-04-01 18:50:40.749724 | orchestrator | 18:50:40.749 STDOUT terraform:  + updated_at = (known after apply) 2025-04-01 18:50:40.749732 | orchestrator | 18:50:40.749 STDOUT terraform:  } 2025-04-01 18:50:40.749784 | orchestrator | 18:50:40.749 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-04-01 18:50:40.749811 | orchestrator | 18:50:40.749 STDOUT terraform:  # (config refers to values 
not yet known) 2025-04-01 18:50:40.749851 | orchestrator | 18:50:40.749 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-04-01 18:50:40.749884 | orchestrator | 18:50:40.749 STDOUT terraform:  + checksum = (known after apply) 2025-04-01 18:50:40.749917 | orchestrator | 18:50:40.749 STDOUT terraform:  + created_at = (known after apply) 2025-04-01 18:50:40.749946 | orchestrator | 18:50:40.749 STDOUT terraform:  + file = (known after apply) 2025-04-01 18:50:40.749976 | orchestrator | 18:50:40.749 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.750006 | orchestrator | 18:50:40.749 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.750361 | orchestrator | 18:50:40.749 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-01 18:50:40.750395 | orchestrator | 18:50:40.750 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-01 18:50:40.750417 | orchestrator | 18:50:40.750 STDOUT terraform:  + most_recent = true 2025-04-01 18:50:40.750450 | orchestrator | 18:50:40.750 STDOUT terraform:  + name = (known after apply) 2025-04-01 18:50:40.750525 | orchestrator | 18:50:40.750 STDOUT terraform:  + protected = (known after apply) 2025-04-01 18:50:40.750561 | orchestrator | 18:50:40.750 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.750570 | orchestrator | 18:50:40.750 STDOUT terraform:  + schema = (known after apply) 2025-04-01 18:50:40.750594 | orchestrator | 18:50:40.750 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-01 18:50:40.750626 | orchestrator | 18:50:40.750 STDOUT terraform:  + tags = (known after apply) 2025-04-01 18:50:40.750657 | orchestrator | 18:50:40.750 STDOUT terraform:  + updated_at = (known after apply) 2025-04-01 18:50:40.750666 | orchestrator | 18:50:40.750 STDOUT terraform:  } 2025-04-01 18:50:40.750723 | orchestrator | 18:50:40.750 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-04-01 18:50:40.750754 | orchestrator | 18:50:40.750 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-04-01 18:50:40.750792 | orchestrator | 18:50:40.750 STDOUT terraform:  + content = (known after apply) 2025-04-01 18:50:40.750833 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-01 18:50:40.750869 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-01 18:50:40.750905 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-01 18:50:40.750951 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-01 18:50:40.751004 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-01 18:50:40.751040 | orchestrator | 18:50:40.750 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-01 18:50:40.751064 | orchestrator | 18:50:40.751 STDOUT terraform:  + directory_permission = "0777" 2025-04-01 18:50:40.751090 | orchestrator | 18:50:40.751 STDOUT terraform:  + file_permission = "0644" 2025-04-01 18:50:40.751127 | orchestrator | 18:50:40.751 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-04-01 18:50:40.751166 | orchestrator | 18:50:40.751 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.751174 | orchestrator | 18:50:40.751 STDOUT terraform:  } 2025-04-01 18:50:40.751208 | orchestrator | 18:50:40.751 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-04-01 18:50:40.751234 | orchestrator | 18:50:40.751 
STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-04-01 18:50:40.751271 | orchestrator | 18:50:40.751 STDOUT terraform:  + content = (known after apply) 2025-04-01 18:50:40.751308 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-01 18:50:40.751348 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-01 18:50:40.751386 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-01 18:50:40.751420 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-01 18:50:40.751456 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-01 18:50:40.751502 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-01 18:50:40.751529 | orchestrator | 18:50:40.751 STDOUT terraform:  + directory_permission = "0777" 2025-04-01 18:50:40.751554 | orchestrator | 18:50:40.751 STDOUT terraform:  + file_permission = "0644" 2025-04-01 18:50:40.751586 | orchestrator | 18:50:40.751 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-04-01 18:50:40.751624 | orchestrator | 18:50:40.751 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.751633 | orchestrator | 18:50:40.751 STDOUT terraform:  } 2025-04-01 18:50:40.751663 | orchestrator | 18:50:40.751 STDOUT terraform:  # local_file.inventory will be created 2025-04-01 18:50:40.751688 | orchestrator | 18:50:40.751 STDOUT terraform:  + resource "local_file" "inventory" { 2025-04-01 18:50:40.751727 | orchestrator | 18:50:40.751 STDOUT terraform:  + content = (known after apply) 2025-04-01 18:50:40.751763 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-01 18:50:40.751799 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-01 18:50:40.751835 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-01 18:50:40.751875 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-01 18:50:40.751910 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-01 18:50:40.751947 | orchestrator | 18:50:40.751 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-01 18:50:40.751971 | orchestrator | 18:50:40.751 STDOUT terraform:  + directory_permission = "0777" 2025-04-01 18:50:40.751997 | orchestrator | 18:50:40.751 STDOUT terraform:  + file_permission = "0644" 2025-04-01 18:50:40.752032 | orchestrator | 18:50:40.751 STDOUT terraform:  + filename = "inventory.ci" 2025-04-01 18:50:40.752067 | orchestrator | 18:50:40.752 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.752075 | orchestrator | 18:50:40.752 STDOUT terraform:  } 2025-04-01 18:50:40.752109 | orchestrator | 18:50:40.752 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-04-01 18:50:40.752140 | orchestrator | 18:50:40.752 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-04-01 18:50:40.752173 | orchestrator | 18:50:40.752 STDOUT terraform:  + content = (sensitive value) 2025-04-01 18:50:40.752209 | orchestrator | 18:50:40.752 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-01 18:50:40.752244 | orchestrator | 18:50:40.752 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-01 18:50:40.752279 | orchestrator | 18:50:40.752 STDOUT 
terraform:  + content_md5 = (known after apply) 2025-04-01 18:50:40.752323 | orchestrator | 18:50:40.752 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-01 18:50:40.752355 | orchestrator | 18:50:40.752 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-01 18:50:40.752392 | orchestrator | 18:50:40.752 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-01 18:50:40.752415 | orchestrator | 18:50:40.752 STDOUT terraform:  + directory_permission = "0700" 2025-04-01 18:50:40.752441 | orchestrator | 18:50:40.752 STDOUT terraform:  + file_permission = "0600" 2025-04-01 18:50:40.752472 | orchestrator | 18:50:40.752 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-04-01 18:50:40.752614 | orchestrator | 18:50:40.752 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.752624 | orchestrator | 18:50:40.752 STDOUT terraform:  } 2025-04-01 18:50:40.752670 | orchestrator | 18:50:40.752 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-04-01 18:50:40.752679 | orchestrator | 18:50:40.752 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-04-01 18:50:40.752709 | orchestrator | 18:50:40.752 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.752717 | orchestrator | 18:50:40.752 STDOUT terraform:  } 2025-04-01 18:50:40.752771 | orchestrator | 18:50:40.752 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-04-01 18:50:40.752819 | orchestrator | 18:50:40.752 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-04-01 18:50:40.752850 | orchestrator | 18:50:40.752 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.752872 | orchestrator | 18:50:40.752 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.752904 | orchestrator | 18:50:40.752 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.752934 | orchestrator | 18:50:40.752 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.752965 | orchestrator | 18:50:40.752 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.753006 | orchestrator | 18:50:40.752 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-04-01 18:50:40.753040 | orchestrator | 18:50:40.752 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.753048 | orchestrator | 18:50:40.753 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.753085 | orchestrator | 18:50:40.753 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.753093 | orchestrator | 18:50:40.753 STDOUT terraform:  } 2025-04-01 18:50:40.753145 | orchestrator | 18:50:40.753 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-04-01 18:50:40.753190 | orchestrator | 18:50:40.753 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.753221 | orchestrator | 18:50:40.753 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.753244 | orchestrator | 18:50:40.753 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.753276 | orchestrator | 18:50:40.753 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.753310 | orchestrator | 18:50:40.753 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.753340 | orchestrator | 18:50:40.753 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.753378 | orchestrator | 18:50:40.753 STDOUT terraform:  + name = 
"testbed-volume-0-node-base" 2025-04-01 18:50:40.753410 | orchestrator | 18:50:40.753 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.753432 | orchestrator | 18:50:40.753 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.753455 | orchestrator | 18:50:40.753 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.753463 | orchestrator | 18:50:40.753 STDOUT terraform:  } 2025-04-01 18:50:40.753535 | orchestrator | 18:50:40.753 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-04-01 18:50:40.753580 | orchestrator | 18:50:40.753 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.753611 | orchestrator | 18:50:40.753 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.753633 | orchestrator | 18:50:40.753 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.753665 | orchestrator | 18:50:40.753 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.753708 | orchestrator | 18:50:40.753 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.753740 | orchestrator | 18:50:40.753 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.753781 | orchestrator | 18:50:40.753 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-04-01 18:50:40.753815 | orchestrator | 18:50:40.753 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.753829 | orchestrator | 18:50:40.753 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.753855 | orchestrator | 18:50:40.753 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.753863 | orchestrator | 18:50:40.753 STDOUT terraform:  } 2025-04-01 18:50:40.753914 | orchestrator | 18:50:40.753 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-04-01 18:50:40.753958 | orchestrator | 18:50:40.753 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.753990 | orchestrator | 18:50:40.753 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.754010 | orchestrator | 18:50:40.753 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.754064 | orchestrator | 18:50:40.754 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.754098 | orchestrator | 18:50:40.754 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.754129 | orchestrator | 18:50:40.754 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.754169 | orchestrator | 18:50:40.754 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-04-01 18:50:40.754201 | orchestrator | 18:50:40.754 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.754222 | orchestrator | 18:50:40.754 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.754248 | orchestrator | 18:50:40.754 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.754256 | orchestrator | 18:50:40.754 STDOUT terraform:  } 2025-04-01 18:50:40.754306 | orchestrator | 18:50:40.754 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-04-01 18:50:40.754356 | orchestrator | 18:50:40.754 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.754386 | orchestrator | 18:50:40.754 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.754407 | orchestrator | 18:50:40.754 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 
18:50:40.754440 | orchestrator | 18:50:40.754 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.754471 | orchestrator | 18:50:40.754 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.754514 | orchestrator | 18:50:40.754 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.754550 | orchestrator | 18:50:40.754 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-04-01 18:50:40.754581 | orchestrator | 18:50:40.754 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.754604 | orchestrator | 18:50:40.754 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.754629 | orchestrator | 18:50:40.754 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.754637 | orchestrator | 18:50:40.754 STDOUT terraform:  } 2025-04-01 18:50:40.754686 | orchestrator | 18:50:40.754 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-04-01 18:50:40.754732 | orchestrator | 18:50:40.754 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.754763 | orchestrator | 18:50:40.754 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.754777 | orchestrator | 18:50:40.754 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.754811 | orchestrator | 18:50:40.754 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.754842 | orchestrator | 18:50:40.754 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.754874 | orchestrator | 18:50:40.754 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.754913 | orchestrator | 18:50:40.754 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-04-01 18:50:40.754945 | orchestrator | 18:50:40.754 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.754967 | orchestrator | 18:50:40.754 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.754989 | orchestrator | 18:50:40.754 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.754996 | orchestrator | 18:50:40.754 STDOUT terraform:  } 2025-04-01 18:50:40.755044 | orchestrator | 18:50:40.754 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-04-01 18:50:40.755092 | orchestrator | 18:50:40.755 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-01 18:50:40.755124 | orchestrator | 18:50:40.755 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.755145 | orchestrator | 18:50:40.755 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.755179 | orchestrator | 18:50:40.755 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.755210 | orchestrator | 18:50:40.755 STDOUT terraform:  + image_id = (known after apply) 2025-04-01 18:50:40.755242 | orchestrator | 18:50:40.755 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.755280 | orchestrator | 18:50:40.755 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-04-01 18:50:40.755312 | orchestrator | 18:50:40.755 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.755334 | orchestrator | 18:50:40.755 STDOUT terraform:  + size = 80 2025-04-01 18:50:40.755355 | orchestrator | 18:50:40.755 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.755363 | orchestrator | 18:50:40.755 STDOUT terraform:  } 2025-04-01 18:50:40.755411 | orchestrator | 18:50:40.755 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be 
created 2025-04-01 18:50:40.755456 | orchestrator | 18:50:40.755 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.755507 | orchestrator | 18:50:40.755 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.755517 | orchestrator | 18:50:40.755 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.755558 | orchestrator | 18:50:40.755 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.755588 | orchestrator | 18:50:40.755 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.755625 | orchestrator | 18:50:40.755 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-04-01 18:50:40.755658 | orchestrator | 18:50:40.755 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.755682 | orchestrator | 18:50:40.755 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.755696 | orchestrator | 18:50:40.755 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.755703 | orchestrator | 18:50:40.755 STDOUT terraform:  } 2025-04-01 18:50:40.755753 | orchestrator | 18:50:40.755 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-04-01 18:50:40.755797 | orchestrator | 18:50:40.755 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.755827 | orchestrator | 18:50:40.755 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.755849 | orchestrator | 18:50:40.755 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.755881 | orchestrator | 18:50:40.755 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.755913 | orchestrator | 18:50:40.755 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.755950 | orchestrator | 18:50:40.755 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-01 18:50:40.755983 | orchestrator | 18:50:40.755 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.756006 | orchestrator | 18:50:40.755 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.756027 | orchestrator | 18:50:40.755 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.756049 | orchestrator | 18:50:40.756 STDOUT terraform:  } 2025-04-01 18:50:40.756095 | orchestrator | 18:50:40.756 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-01 18:50:40.756140 | orchestrator | 18:50:40.756 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.756171 | orchestrator | 18:50:40.756 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.756180 | orchestrator | 18:50:40.756 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.756219 | orchestrator | 18:50:40.756 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.757018 | orchestrator | 18:50:40.756 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.757051 | orchestrator | 18:50:40.757 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-01 18:50:40.757083 | orchestrator | 18:50:40.757 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.757104 | orchestrator | 18:50:40.757 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.757132 | orchestrator | 18:50:40.757 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.757142 | orchestrator | 18:50:40.757 STDOUT terraform:  } 2025-04-01 18:50:40.757187 | orchestrator | 18:50:40.757 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-01 18:50:40.757231 | orchestrator | 18:50:40.757 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.757265 | orchestrator | 18:50:40.757 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.757289 | orchestrator | 18:50:40.757 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.757321 | orchestrator | 18:50:40.757 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.757354 | orchestrator | 18:50:40.757 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.757390 | orchestrator | 18:50:40.757 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-01 18:50:40.757421 | orchestrator | 18:50:40.757 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.757442 | orchestrator | 18:50:40.757 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.757463 | orchestrator | 18:50:40.757 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.757472 | orchestrator | 18:50:40.757 STDOUT terraform:  } 2025-04-01 18:50:40.757595 | orchestrator | 18:50:40.757 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-01 18:50:40.757641 | orchestrator | 18:50:40.757 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.757669 | orchestrator | 18:50:40.757 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.757691 | orchestrator | 18:50:40.757 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.757721 | orchestrator | 18:50:40.757 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.757753 | orchestrator | 18:50:40.757 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.757811 | orchestrator | 18:50:40.757 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-01 18:50:40.757842 | orchestrator | 18:50:40.757 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.757865 | orchestrator | 18:50:40.757 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.757886 | orchestrator | 18:50:40.757 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.757894 | orchestrator | 18:50:40.757 STDOUT terraform:  } 2025-04-01 18:50:40.757943 | orchestrator | 18:50:40.757 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-01 18:50:40.757987 | orchestrator | 18:50:40.757 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.758040 | orchestrator | 18:50:40.757 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.760086 | orchestrator | 18:50:40.758 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.760116 | orchestrator | 18:50:40.760 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.760148 | orchestrator | 18:50:40.760 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.760186 | orchestrator | 18:50:40.760 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-01 18:50:40.760218 | orchestrator | 18:50:40.760 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.760255 | orchestrator | 18:50:40.760 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.760280 | orchestrator | 18:50:40.760 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.760296 | orchestrator | 18:50:40.760 STDOUT terraform:  } 2025-04-01 18:50:40.760345 | orchestrator | 18:50:40.760 
STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-01 18:50:40.760391 | orchestrator | 18:50:40.760 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.760425 | orchestrator | 18:50:40.760 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.760441 | orchestrator | 18:50:40.760 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.760475 | orchestrator | 18:50:40.760 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.760518 | orchestrator | 18:50:40.760 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.760558 | orchestrator | 18:50:40.760 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-01 18:50:40.760590 | orchestrator | 18:50:40.760 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.760610 | orchestrator | 18:50:40.760 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.760633 | orchestrator | 18:50:40.760 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.760641 | orchestrator | 18:50:40.760 STDOUT terraform:  } 2025-04-01 18:50:40.760687 | orchestrator | 18:50:40.760 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-01 18:50:40.760728 | orchestrator | 18:50:40.760 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.760758 | orchestrator | 18:50:40.760 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.760780 | orchestrator | 18:50:40.760 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.760812 | orchestrator | 18:50:40.760 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.760843 | orchestrator | 18:50:40.760 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.760882 | orchestrator | 18:50:40.760 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-01 18:50:40.760915 | orchestrator | 18:50:40.760 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.760935 | orchestrator | 18:50:40.760 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.760958 | orchestrator | 18:50:40.760 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.760965 | orchestrator | 18:50:40.760 STDOUT terraform:  } 2025-04-01 18:50:40.761019 | orchestrator | 18:50:40.760 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-01 18:50:40.761062 | orchestrator | 18:50:40.761 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.761093 | orchestrator | 18:50:40.761 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.761115 | orchestrator | 18:50:40.761 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.761147 | orchestrator | 18:50:40.761 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.761176 | orchestrator | 18:50:40.761 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.761216 | orchestrator | 18:50:40.761 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-01 18:50:40.761248 | orchestrator | 18:50:40.761 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.761270 | orchestrator | 18:50:40.761 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.761291 | orchestrator | 18:50:40.761 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.761306 | orchestrator | 18:50:40.761 STDOUT terraform:  } 2025-04-01 18:50:40.761351 | orchestrator 
| 18:50:40.761 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-01 18:50:40.761394 | orchestrator | 18:50:40.761 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.761425 | orchestrator | 18:50:40.761 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.761446 | orchestrator | 18:50:40.761 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.761505 | orchestrator | 18:50:40.761 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.761530 | orchestrator | 18:50:40.761 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.761569 | orchestrator | 18:50:40.761 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-01 18:50:40.761601 | orchestrator | 18:50:40.761 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.761621 | orchestrator | 18:50:40.761 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.761642 | orchestrator | 18:50:40.761 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.761649 | orchestrator | 18:50:40.761 STDOUT terraform:  } 2025-04-01 18:50:40.761697 | orchestrator | 18:50:40.761 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-01 18:50:40.761742 | orchestrator | 18:50:40.761 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.761772 | orchestrator | 18:50:40.761 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.761793 | orchestrator | 18:50:40.761 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.761826 | orchestrator | 18:50:40.761 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.761859 | orchestrator | 18:50:40.761 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.761894 | orchestrator | 18:50:40.761 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-01 18:50:40.761925 | orchestrator | 18:50:40.761 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.761946 | orchestrator | 18:50:40.761 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.761966 | orchestrator | 18:50:40.761 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.761973 | orchestrator | 18:50:40.761 STDOUT terraform:  } 2025-04-01 18:50:40.762037 | orchestrator | 18:50:40.761 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-01 18:50:40.762081 | orchestrator | 18:50:40.762 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.762112 | orchestrator | 18:50:40.762 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.762132 | orchestrator | 18:50:40.762 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.762163 | orchestrator | 18:50:40.762 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.762194 | orchestrator | 18:50:40.762 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.762232 | orchestrator | 18:50:40.762 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-01 18:50:40.762262 | orchestrator | 18:50:40.762 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.762283 | orchestrator | 18:50:40.762 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.762303 | orchestrator | 18:50:40.762 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.762310 | orchestrator | 18:50:40.762 STDOUT terraform:  } 2025-04-01 
18:50:40.762356 | orchestrator | 18:50:40.762 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-01 18:50:40.762399 | orchestrator | 18:50:40.762 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.762429 | orchestrator | 18:50:40.762 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.762450 | orchestrator | 18:50:40.762 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.762495 | orchestrator | 18:50:40.762 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.762527 | orchestrator | 18:50:40.762 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.762564 | orchestrator | 18:50:40.762 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-01 18:50:40.762596 | orchestrator | 18:50:40.762 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.762616 | orchestrator | 18:50:40.762 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.762637 | orchestrator | 18:50:40.762 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.762644 | orchestrator | 18:50:40.762 STDOUT terraform:  } 2025-04-01 18:50:40.762689 | orchestrator | 18:50:40.762 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-01 18:50:40.762731 | orchestrator | 18:50:40.762 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.762762 | orchestrator | 18:50:40.762 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.762783 | orchestrator | 18:50:40.762 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.762814 | orchestrator | 18:50:40.762 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.762845 | orchestrator | 18:50:40.762 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.762882 | orchestrator | 18:50:40.762 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-01 18:50:40.762913 | orchestrator | 18:50:40.762 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.762934 | orchestrator | 18:50:40.762 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.762955 | orchestrator | 18:50:40.762 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.762962 | orchestrator | 18:50:40.762 STDOUT terraform:  } 2025-04-01 18:50:40.763008 | orchestrator | 18:50:40.762 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-01 18:50:40.763054 | orchestrator | 18:50:40.763 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.763084 | orchestrator | 18:50:40.763 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.763104 | orchestrator | 18:50:40.763 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.763135 | orchestrator | 18:50:40.763 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.763175 | orchestrator | 18:50:40.763 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.763203 | orchestrator | 18:50:40.763 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-01 18:50:40.763234 | orchestrator | 18:50:40.763 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.763253 | orchestrator | 18:50:40.763 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.763274 | orchestrator | 18:50:40.763 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.763281 | orchestrator | 18:50:40.763 STDOUT 
terraform:  } 2025-04-01 18:50:40.763327 | orchestrator | 18:50:40.763 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-01 18:50:40.763371 | orchestrator | 18:50:40.763 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.763402 | orchestrator | 18:50:40.763 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.763490 | orchestrator | 18:50:40.763 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.763520 | orchestrator | 18:50:40.763 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.763557 | orchestrator | 18:50:40.763 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.763592 | orchestrator | 18:50:40.763 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-01 18:50:40.763623 | orchestrator | 18:50:40.763 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.763648 | orchestrator | 18:50:40.763 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.763656 | orchestrator | 18:50:40.763 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.763671 | orchestrator | 18:50:40.763 STDOUT terraform:  } 2025-04-01 18:50:40.763718 | orchestrator | 18:50:40.763 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-01 18:50:40.763759 | orchestrator | 18:50:40.763 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.763791 | orchestrator | 18:50:40.763 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.763812 | orchestrator | 18:50:40.763 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.763842 | orchestrator | 18:50:40.763 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.763873 | orchestrator | 18:50:40.763 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.763913 | orchestrator | 18:50:40.763 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-01 18:50:40.763944 | orchestrator | 18:50:40.763 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.763964 | orchestrator | 18:50:40.763 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.763987 | orchestrator | 18:50:40.763 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.763995 | orchestrator | 18:50:40.763 STDOUT terraform:  } 2025-04-01 18:50:40.764041 | orchestrator | 18:50:40.763 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-01 18:50:40.764085 | orchestrator | 18:50:40.764 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-01 18:50:40.764120 | orchestrator | 18:50:40.764 STDOUT terraform:  + attachment = (known after apply) 2025-04-01 18:50:40.764140 | orchestrator | 18:50:40.764 STDOUT terraform:  + availability_zone = "nova" 2025-04-01 18:50:40.764172 | orchestrator | 18:50:40.764 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.764203 | orchestrator | 18:50:40.764 STDOUT terraform:  + metadata = (known after apply) 2025-04-01 18:50:40.764239 | orchestrator | 18:50:40.764 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-01 18:50:40.764270 | orchestrator | 18:50:40.764 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.764293 | orchestrator | 18:50:40.764 STDOUT terraform:  + size = 20 2025-04-01 18:50:40.764314 | orchestrator | 18:50:40.764 STDOUT terraform:  + volume_type = "ssd" 2025-04-01 18:50:40.764321 | orchestrator 
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + availability_zone   = "nova"
      + config_drive        = true
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + stop_before_destroy = false
      + access_ip_v4, access_ip_v6, all_metadata, all_tags, created, flavor_id,
        id, image_id, image_name, region, security_groups, updated, user_data
                            = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4, fixed_ip_v6, mac, name, port, uuid = (known after apply)
        }
    }
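The manager_server block describes a boot-from-volume instance attached to a pre-created management port. A sketch of a definition that would plan this way; the referenced boot volume (manager_base_volume) is an assumption:

# Hypothetical sketch only; manager_base_volume is an assumed resource name.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  availability_zone = "nova"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.manager_base_volume.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.manager_port_management.id
  }
}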
  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + availability_zone   = "nova"
      + config_drive        = true
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + stop_before_destroy = false
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + access_ip_v4, access_ip_v6, all_metadata, all_tags, created, flavor_id,
        id, image_id, image_name, region, security_groups, updated
                            = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4, fixed_ip_v6, mac, name, port, uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  # openstack_compute_instance_v2.node_server[2] will be created
  # openstack_compute_instance_v2.node_server[3] will be created
  # openstack_compute_instance_v2.node_server[4] will be created
  # openstack_compute_instance_v2.node_server[5] will be created
  #   (arguments identical to node_server[0], name = "testbed-node-1" ... "testbed-node-5")

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + name        = "testbed"
      + private_key = (sensitive value)
      + fingerprint, id, public_key, region, user_id = (known after apply)
    }
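The six node_server blocks differ only in their name, which points to a counted resource, and the keypair is generated server-side (no public_key is supplied, so the provider returns the private key as a sensitive value). A sketch under those assumptions; the boot volume reference and the user_data source are assumptions:

# Hypothetical sketch only; node_base_volume and the user_data path are assumptions.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"   # omitting public_key lets OpenStack generate the key pair
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  availability_zone = "nova"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  # the identical user_data hash in the plan suggests one shared cloud-init payload
  user_data         = file("${path.module}/node_user_data.yml")

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}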
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [17] will be created
  #   (arguments identical to node_volume_attachment[0])

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip, floating_ip, id, port_id, region = (known after apply)
    }
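Eighteen identical volume_attach blocks point to a counted attachment resource; with three data volumes per node, a modulo mapping reproduces the volume names seen earlier. A sketch assuming that mapping:

# Hypothetical sketch only; the index arithmetic mirrors the volume names in the plan.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}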
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + pool = "public"
      + address, all_tags, dns_domain, dns_name, fixed_ip, id, port_id,
        region, subnet_id, tenant_id = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + name                    = "net-testbed-management"
      + availability_zone_hints = [
          + "nova",
        ]
      + admin_state_up, all_tags, dns_domain, external, id, mtu,
        port_security_enabled, qos_policy_id, region, shared, tenant_id,
        transparent_vlan = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags,
        device_id, device_owner, dns_assignment, dns_name, id, mac_address,
        network_id, port_security_enabled, qos_policy_id, region,
        security_group_ids, tenant_id = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
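The management network, the manager port with its allowed address pairs, and the floating IP association could come from definitions along these lines; the subnet resource name is an assumption, and its addressing is implied only by the fixed IP and address pairs visible above:

# Hypothetical sketch only; subnet_management is an assumed resource name.
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.5"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}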
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags,
        device_id, device_owner, dns_assignment, dns_name, id, mac_address,
        network_id, port_security_enabled, qos_policy_id, region,
        security_group_ids, tenant_id = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  #   (arguments identical to node_port_management[0], except fixed_ip ip_address = "192.168.16.11")

  # openstack_networking_port_v2.node_port_management[2] will be created
  #   (arguments identical to node_port_management[0], except fixed_ip ip_address = "192.168.16.12")
openstack_networking_port_v2.node_port_management[3] will be created 2025-04-01 18:50:40.785513 | orchestrator | 18:50:40.785 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-01 18:50:40.785549 | orchestrator | 18:50:40.785 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-01 18:50:40.785586 | orchestrator | 18:50:40.785 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-01 18:50:40.785622 | orchestrator | 18:50:40.785 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-01 18:50:40.785659 | orchestrator | 18:50:40.785 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.785696 | orchestrator | 18:50:40.785 STDOUT terraform:  + device_id = (known after apply) 2025-04-01 18:50:40.785732 | orchestrator | 18:50:40.785 STDOUT terraform:  + device_owner = (known after apply) 2025-04-01 18:50:40.785768 | orchestrator | 18:50:40.785 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-01 18:50:40.785804 | orchestrator | 18:50:40.785 STDOUT terraform:  + dns_name = (known after apply) 2025-04-01 18:50:40.785841 | orchestrator | 18:50:40.785 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.785877 | orchestrator | 18:50:40.785 STDOUT terraform:  + mac_address = (known after apply) 2025-04-01 18:50:40.785913 | orchestrator | 18:50:40.785 STDOUT terraform:  + network_id = (known after apply) 2025-04-01 18:50:40.785951 | orchestrator | 18:50:40.785 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-01 18:50:40.785987 | orchestrator | 18:50:40.785 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-01 18:50:40.786048 | orchestrator | 18:50:40.785 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.786083 | orchestrator | 18:50:40.786 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-01 18:50:40.786120 | orchestrator | 18:50:40.786 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.786141 | orchestrator | 18:50:40.786 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.786172 | orchestrator | 18:50:40.786 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-01 18:50:40.786180 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786203 | orchestrator | 18:50:40.786 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.786233 | orchestrator | 18:50:40.786 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-01 18:50:40.786240 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786270 | orchestrator | 18:50:40.786 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.786292 | orchestrator | 18:50:40.786 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-01 18:50:40.786307 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786321 | orchestrator | 18:50:40.786 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.786351 | orchestrator | 18:50:40.786 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-01 18:50:40.786358 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786385 | orchestrator | 18:50:40.786 STDOUT terraform:  + binding (known after apply) 2025-04-01 18:50:40.786399 | orchestrator | 18:50:40.786 STDOUT terraform:  + fixed_ip { 2025-04-01 18:50:40.786426 | orchestrator | 18:50:40.786 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-01 18:50:40.786456 | orchestrator | 18:50:40.786 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-04-01 18:50:40.786463 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786516 | orchestrator | 18:50:40.786 STDOUT terraform:  } 2025-04-01 18:50:40.786539 | orchestrator | 18:50:40.786 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-01 18:50:40.786586 | orchestrator | 18:50:40.786 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-01 18:50:40.786624 | orchestrator | 18:50:40.786 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-01 18:50:40.786658 | orchestrator | 18:50:40.786 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-01 18:50:40.786693 | orchestrator | 18:50:40.786 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-01 18:50:40.786729 | orchestrator | 18:50:40.786 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.786766 | orchestrator | 18:50:40.786 STDOUT terraform:  + device_id = (known after apply) 2025-04-01 18:50:40.786801 | orchestrator | 18:50:40.786 STDOUT terraform:  + device_owner = (known after apply) 2025-04-01 18:50:40.786836 | orchestrator | 18:50:40.786 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-01 18:50:40.786877 | orchestrator | 18:50:40.786 STDOUT terraform:  + dns_name = (known after apply) 2025-04-01 18:50:40.786912 | orchestrator | 18:50:40.786 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.786948 | orchestrator | 18:50:40.786 STDOUT terraform:  + mac_address = (known after apply) 2025-04-01 18:50:40.786983 | orchestrator | 18:50:40.786 STDOUT terraform:  + network_id = (known after apply) 2025-04-01 18:50:40.787018 | orchestrator | 18:50:40.786 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-01 18:50:40.787055 | orchestrator | 18:50:40.787 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-01 18:50:40.787091 | orchestrator | 18:50:40.787 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.787128 | orchestrator | 18:50:40.787 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-01 18:50:40.787165 | orchestrator | 18:50:40.787 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.787185 | orchestrator | 18:50:40.787 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.787214 | orchestrator | 18:50:40.787 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-01 18:50:40.787225 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787244 | orchestrator | 18:50:40.787 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.787269 | orchestrator | 18:50:40.787 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-01 18:50:40.787276 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787298 | orchestrator | 18:50:40.787 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.787326 | orchestrator | 18:50:40.787 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-01 18:50:40.787333 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787355 | orchestrator | 18:50:40.787 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.787384 | orchestrator | 18:50:40.787 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-01 18:50:40.787391 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787418 | orchestrator | 18:50:40.787 STDOUT terraform:  + binding (known after apply) 
2025-04-01 18:50:40.787425 | orchestrator | 18:50:40.787 STDOUT terraform:  + fixed_ip { 2025-04-01 18:50:40.787453 | orchestrator | 18:50:40.787 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-01 18:50:40.787503 | orchestrator | 18:50:40.787 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-01 18:50:40.787511 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787517 | orchestrator | 18:50:40.787 STDOUT terraform:  } 2025-04-01 18:50:40.787566 | orchestrator | 18:50:40.787 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-04-01 18:50:40.787612 | orchestrator | 18:50:40.787 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-01 18:50:40.787647 | orchestrator | 18:50:40.787 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-01 18:50:40.787683 | orchestrator | 18:50:40.787 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-01 18:50:40.787718 | orchestrator | 18:50:40.787 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-01 18:50:40.788026 | orchestrator | 18:50:40.787 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.788106 | orchestrator | 18:50:40.788 STDOUT terraform:  + device_id = (known after apply) 2025-04-01 18:50:40.788155 | orchestrator | 18:50:40.788 STDOUT terraform:  + device_owner = (known after apply) 2025-04-01 18:50:40.788193 | orchestrator | 18:50:40.788 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-01 18:50:40.788236 | orchestrator | 18:50:40.788 STDOUT terraform:  + dns_name = (known after apply) 2025-04-01 18:50:40.788274 | orchestrator | 18:50:40.788 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.788315 | orchestrator | 18:50:40.788 STDOUT terraform:  + mac_address = (known after apply) 2025-04-01 18:50:40.788354 | orchestrator | 18:50:40.788 STDOUT terraform:  + network_id = (known after apply) 2025-04-01 18:50:40.788394 | orchestrator | 18:50:40.788 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-01 18:50:40.788433 | orchestrator | 18:50:40.788 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-01 18:50:40.788474 | orchestrator | 18:50:40.788 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.788527 | orchestrator | 18:50:40.788 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-01 18:50:40.788566 | orchestrator | 18:50:40.788 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.788588 | orchestrator | 18:50:40.788 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.788620 | orchestrator | 18:50:40.788 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-01 18:50:40.788628 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.788651 | orchestrator | 18:50:40.788 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.788684 | orchestrator | 18:50:40.788 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-01 18:50:40.788700 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.788725 | orchestrator | 18:50:40.788 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 18:50:40.788755 | orchestrator | 18:50:40.788 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-01 18:50:40.788773 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.788794 | orchestrator | 18:50:40.788 STDOUT terraform:  + allowed_address_pairs { 2025-04-01 
18:50:40.788828 | orchestrator | 18:50:40.788 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-01 18:50:40.788843 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.788871 | orchestrator | 18:50:40.788 STDOUT terraform:  + binding (known after apply) 2025-04-01 18:50:40.788888 | orchestrator | 18:50:40.788 STDOUT terraform:  + fixed_ip { 2025-04-01 18:50:40.788918 | orchestrator | 18:50:40.788 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-01 18:50:40.788953 | orchestrator | 18:50:40.788 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-01 18:50:40.788961 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.788978 | orchestrator | 18:50:40.788 STDOUT terraform:  } 2025-04-01 18:50:40.789031 | orchestrator | 18:50:40.788 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-01 18:50:40.789086 | orchestrator | 18:50:40.789 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-01 18:50:40.789104 | orchestrator | 18:50:40.789 STDOUT terraform:  + force_destroy = false 2025-04-01 18:50:40.789139 | orchestrator | 18:50:40.789 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.789169 | orchestrator | 18:50:40.789 STDOUT terraform:  + port_id = (known after apply) 2025-04-01 18:50:40.789202 | orchestrator | 18:50:40.789 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.789232 | orchestrator | 18:50:40.789 STDOUT terraform:  + router_id = (known after apply) 2025-04-01 18:50:40.789265 | orchestrator | 18:50:40.789 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-01 18:50:40.789280 | orchestrator | 18:50:40.789 STDOUT terraform:  } 2025-04-01 18:50:40.789322 | orchestrator | 18:50:40.789 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-01 18:50:40.789362 | orchestrator | 18:50:40.789 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-01 18:50:40.789401 | orchestrator | 18:50:40.789 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-01 18:50:40.789438 | orchestrator | 18:50:40.789 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.789466 | orchestrator | 18:50:40.789 STDOUT terraform:  + availability_zone_hints = [ 2025-04-01 18:50:40.789517 | orchestrator | 18:50:40.789 STDOUT terraform:  + "nova", 2025-04-01 18:50:40.789543 | orchestrator | 18:50:40.789 STDOUT terraform:  ] 2025-04-01 18:50:40.789549 | orchestrator | 18:50:40.789 STDOUT terraform:  + distributed = (known after apply) 2025-04-01 18:50:40.789586 | orchestrator | 18:50:40.789 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-01 18:50:40.789638 | orchestrator | 18:50:40.789 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-01 18:50:40.789678 | orchestrator | 18:50:40.789 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.789710 | orchestrator | 18:50:40.789 STDOUT terraform:  + name = "testbed" 2025-04-01 18:50:40.789754 | orchestrator | 18:50:40.789 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.789790 | orchestrator | 18:50:40.789 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.789823 | orchestrator | 18:50:40.789 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-01 18:50:40.789831 | orchestrator | 18:50:40.789 STDOUT terraform:  } 2025-04-01 18:50:40.789889 | orchestrator | 18:50:40.789 
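The plan entries above describe one management port per node (fixed IPs in the 192.168.16.11-15 range, each carrying the same four allowed-address-pairs) plus the "testbed" router attached to the external network. A minimal HCL sketch of resources with that shape follows; the count, the cross-resource references and the .10 address for index 0 are assumptions, only the values visible in the plan are taken from it.

# Sketch only, not the testbed's actual Terraform; shapes and values mirror the plan output above,
# the count and the network/subnet references are assumptions.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                               # indexes [0]..[5] appear in this job
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"              # .11-.15 visible above; .10 for [0] assumed
  }

  # Every port in the plan carries the same four allowed-address-pairs.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}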
STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-01 18:50:40.789947 | orchestrator | 18:50:40.789 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-04-01 18:50:40.789968 | orchestrator | 18:50:40.789 STDOUT terraform:  + description = "ssh" 2025-04-01 18:50:40.789996 | orchestrator | 18:50:40.789 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.790034 | orchestrator | 18:50:40.789 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.790089 | orchestrator | 18:50:40.790 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.790110 | orchestrator | 18:50:40.790 STDOUT terraform:  + port_range_max = 22 2025-04-01 18:50:40.790135 | orchestrator | 18:50:40.790 STDOUT terraform:  + port_range_min = 22 2025-04-01 18:50:40.790158 | orchestrator | 18:50:40.790 STDOUT terraform:  + protocol = "tcp" 2025-04-01 18:50:40.790193 | orchestrator | 18:50:40.790 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.790226 | orchestrator | 18:50:40.790 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.790256 | orchestrator | 18:50:40.790 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.790286 | orchestrator | 18:50:40.790 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.790321 | orchestrator | 18:50:40.790 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.790328 | orchestrator | 18:50:40.790 STDOUT terraform:  } 2025-04-01 18:50:40.790389 | orchestrator | 18:50:40.790 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-01 18:50:40.790445 | orchestrator | 18:50:40.790 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-01 18:50:40.790490 | orchestrator | 18:50:40.790 STDOUT terraform:  + description = "wireguard" 2025-04-01 18:50:40.790528 | orchestrator | 18:50:40.790 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.790553 | orchestrator | 18:50:40.790 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.790587 | orchestrator | 18:50:40.790 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.790609 | orchestrator | 18:50:40.790 STDOUT terraform:  + port_range_max = 51820 2025-04-01 18:50:40.790635 | orchestrator | 18:50:40.790 STDOUT terraform:  + port_range_min = 51820 2025-04-01 18:50:40.790658 | orchestrator | 18:50:40.790 STDOUT terraform:  + protocol = "udp" 2025-04-01 18:50:40.790693 | orchestrator | 18:50:40.790 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.790724 | orchestrator | 18:50:40.790 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.790754 | orchestrator | 18:50:40.790 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.790786 | orchestrator | 18:50:40.790 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.790823 | orchestrator | 18:50:40.790 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.790830 | orchestrator | 18:50:40.790 STDOUT terraform:  } 2025-04-01 18:50:40.794504 | orchestrator | 18:50:40.790 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-01 18:50:40.794704 | orchestrator | 18:50:40.794 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule3" { 2025-04-01 18:50:40.794788 | orchestrator | 18:50:40.794 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.794845 | orchestrator | 18:50:40.794 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.794925 | orchestrator | 18:50:40.794 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.794971 | orchestrator | 18:50:40.794 STDOUT terraform:  + protocol = "tcp" 2025-04-01 18:50:40.795044 | orchestrator | 18:50:40.794 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.795110 | orchestrator | 18:50:40.795 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.795181 | orchestrator | 18:50:40.795 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-01 18:50:40.795246 | orchestrator | 18:50:40.795 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.795319 | orchestrator | 18:50:40.795 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.795354 | orchestrator | 18:50:40.795 STDOUT terraform:  } 2025-04-01 18:50:40.795470 | orchestrator | 18:50:40.795 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-01 18:50:40.795603 | orchestrator | 18:50:40.795 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-01 18:50:40.795656 | orchestrator | 18:50:40.795 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.795709 | orchestrator | 18:50:40.795 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.795777 | orchestrator | 18:50:40.795 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.795827 | orchestrator | 18:50:40.795 STDOUT terraform:  + protocol = "udp" 2025-04-01 18:50:40.795895 | orchestrator | 18:50:40.795 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.795964 | orchestrator | 18:50:40.795 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.796031 | orchestrator | 18:50:40.795 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-01 18:50:40.796098 | orchestrator | 18:50:40.796 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.796163 | orchestrator | 18:50:40.796 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.796199 | orchestrator | 18:50:40.796 STDOUT terraform:  } 2025-04-01 18:50:40.796312 | orchestrator | 18:50:40.796 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-01 18:50:40.796433 | orchestrator | 18:50:40.796 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-01 18:50:40.796513 | orchestrator | 18:50:40.796 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.796567 | orchestrator | 18:50:40.796 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.796634 | orchestrator | 18:50:40.796 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.796697 | orchestrator | 18:50:40.796 STDOUT terraform:  + protocol = "icmp" 2025-04-01 18:50:40.796763 | orchestrator | 18:50:40.796 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.796832 | orchestrator | 18:50:40.796 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.796878 | orchestrator | 18:50:40.796 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.796932 | orchestrator | 18:50:40.796 
STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.796993 | orchestrator | 18:50:40.796 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.797023 | orchestrator | 18:50:40.796 STDOUT terraform:  } 2025-04-01 18:50:40.797121 | orchestrator | 18:50:40.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-01 18:50:40.797212 | orchestrator | 18:50:40.797 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-04-01 18:50:40.797261 | orchestrator | 18:50:40.797 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.797299 | orchestrator | 18:50:40.797 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.797361 | orchestrator | 18:50:40.797 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.797398 | orchestrator | 18:50:40.797 STDOUT terraform:  + protocol = "tcp" 2025-04-01 18:50:40.797461 | orchestrator | 18:50:40.797 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.797530 | orchestrator | 18:50:40.797 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.797579 | orchestrator | 18:50:40.797 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.797632 | orchestrator | 18:50:40.797 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.797693 | orchestrator | 18:50:40.797 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.797719 | orchestrator | 18:50:40.797 STDOUT terraform:  } 2025-04-01 18:50:40.797814 | orchestrator | 18:50:40.797 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-01 18:50:40.797907 | orchestrator | 18:50:40.797 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-01 18:50:40.797956 | orchestrator | 18:50:40.797 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.797994 | orchestrator | 18:50:40.797 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.798087 | orchestrator | 18:50:40.797 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.798124 | orchestrator | 18:50:40.798 STDOUT terraform:  + protocol = "udp" 2025-04-01 18:50:40.798185 | orchestrator | 18:50:40.798 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.798240 | orchestrator | 18:50:40.798 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.798291 | orchestrator | 18:50:40.798 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.798348 | orchestrator | 18:50:40.798 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.798409 | orchestrator | 18:50:40.798 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.798434 | orchestrator | 18:50:40.798 STDOUT terraform:  } 2025-04-01 18:50:40.798562 | orchestrator | 18:50:40.798 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-01 18:50:40.798888 | orchestrator | 18:50:40.798 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-01 18:50:40.798926 | orchestrator | 18:50:40.798 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.798967 | orchestrator | 18:50:40.798 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.799022 | orchestrator | 18:50:40.798 STDOUT terraform:  + id = (known after 
apply) 2025-04-01 18:50:40.799062 | orchestrator | 18:50:40.799 STDOUT terraform:  + protocol = "icmp" 2025-04-01 18:50:40.799116 | orchestrator | 18:50:40.799 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.799172 | orchestrator | 18:50:40.799 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.799216 | orchestrator | 18:50:40.799 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.799269 | orchestrator | 18:50:40.799 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.799353 | orchestrator | 18:50:40.799 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.799383 | orchestrator | 18:50:40.799 STDOUT terraform:  } 2025-04-01 18:50:40.799466 | orchestrator | 18:50:40.799 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-01 18:50:40.799569 | orchestrator | 18:50:40.799 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-01 18:50:40.799604 | orchestrator | 18:50:40.799 STDOUT terraform:  + description = "vrrp" 2025-04-01 18:50:40.799650 | orchestrator | 18:50:40.799 STDOUT terraform:  + direction = "ingress" 2025-04-01 18:50:40.799686 | orchestrator | 18:50:40.799 STDOUT terraform:  + ethertype = "IPv4" 2025-04-01 18:50:40.799738 | orchestrator | 18:50:40.799 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.799778 | orchestrator | 18:50:40.799 STDOUT terraform:  + protocol = "112" 2025-04-01 18:50:40.799830 | orchestrator | 18:50:40.799 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.799892 | orchestrator | 18:50:40.799 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-01 18:50:40.799927 | orchestrator | 18:50:40.799 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-01 18:50:40.799982 | orchestrator | 18:50:40.799 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-01 18:50:40.800034 | orchestrator | 18:50:40.799 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.800062 | orchestrator | 18:50:40.800 STDOUT terraform:  } 2025-04-01 18:50:40.800145 | orchestrator | 18:50:40.800 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-01 18:50:40.800231 | orchestrator | 18:50:40.800 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-01 18:50:40.800279 | orchestrator | 18:50:40.800 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.800341 | orchestrator | 18:50:40.800 STDOUT terraform:  + description = "management security group" 2025-04-01 18:50:40.800391 | orchestrator | 18:50:40.800 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.800443 | orchestrator | 18:50:40.800 STDOUT terraform:  + name = "testbed-management" 2025-04-01 18:50:40.800503 | orchestrator | 18:50:40.800 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.800556 | orchestrator | 18:50:40.800 STDOUT terraform:  + stateful = (known after apply) 2025-04-01 18:50:40.800604 | orchestrator | 18:50:40.800 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.800629 | orchestrator | 18:50:40.800 STDOUT terraform:  } 2025-04-01 18:50:40.800710 | orchestrator | 18:50:40.800 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-01 18:50:40.800787 | orchestrator | 18:50:40.800 STDOUT terraform: 
 + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-04-01 18:50:40.800839 | orchestrator | 18:50:40.800 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.800889 | orchestrator | 18:50:40.800 STDOUT terraform:  + description = "node security group" 2025-04-01 18:50:40.800943 | orchestrator | 18:50:40.800 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.800984 | orchestrator | 18:50:40.800 STDOUT terraform:  + name = "testbed-node" 2025-04-01 18:50:40.801036 | orchestrator | 18:50:40.800 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.801084 | orchestrator | 18:50:40.801 STDOUT terraform:  + stateful = (known after apply) 2025-04-01 18:50:40.801136 | orchestrator | 18:50:40.801 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.801160 | orchestrator | 18:50:40.801 STDOUT terraform:  } 2025-04-01 18:50:40.801239 | orchestrator | 18:50:40.801 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-01 18:50:40.801314 | orchestrator | 18:50:40.801 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-01 18:50:40.801370 | orchestrator | 18:50:40.801 STDOUT terraform:  + all_tags = (known after apply) 2025-04-01 18:50:40.801421 | orchestrator | 18:50:40.801 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-01 18:50:40.801460 | orchestrator | 18:50:40.801 STDOUT terraform:  + dns_nameservers = [ 2025-04-01 18:50:40.801504 | orchestrator | 18:50:40.801 STDOUT terraform:  + "8.8.8.8", 2025-04-01 18:50:40.801537 | orchestrator | 18:50:40.801 STDOUT terraform:  + "9.9.9.9", 2025-04-01 18:50:40.801563 | orchestrator | 18:50:40.801 STDOUT terraform:  ] 2025-04-01 18:50:40.801601 | orchestrator | 18:50:40.801 STDOUT terraform:  + enable_dhcp = true 2025-04-01 18:50:40.801662 | orchestrator | 18:50:40.801 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-01 18:50:40.801716 | orchestrator | 18:50:40.801 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.801754 | orchestrator | 18:50:40.801 STDOUT terraform:  + ip_version = 4 2025-04-01 18:50:40.801805 | orchestrator | 18:50:40.801 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-01 18:50:40.801861 | orchestrator | 18:50:40.801 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-01 18:50:40.801926 | orchestrator | 18:50:40.801 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-01 18:50:40.801980 | orchestrator | 18:50:40.801 STDOUT terraform:  + network_id = (known after apply) 2025-04-01 18:50:40.802034 | orchestrator | 18:50:40.801 STDOUT terraform:  + no_gateway = false 2025-04-01 18:50:40.802326 | orchestrator | 18:50:40.802 STDOUT terraform:  + region = (known after apply) 2025-04-01 18:50:40.802386 | orchestrator | 18:50:40.802 STDOUT terraform:  + service_types = (known after apply) 2025-04-01 18:50:40.802433 | orchestrator | 18:50:40.802 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-01 18:50:40.802463 | orchestrator | 18:50:40.802 STDOUT terraform:  + allocation_pool { 2025-04-01 18:50:40.802516 | orchestrator | 18:50:40.802 STDOUT terraform:  + end = "192.168.31.250" 2025-04-01 18:50:40.802550 | orchestrator | 18:50:40.802 STDOUT terraform:  + start = "192.168.31.200" 2025-04-01 18:50:40.802571 | orchestrator | 18:50:40.802 STDOUT terraform:  } 2025-04-01 18:50:40.802590 | orchestrator | 18:50:40.802 STDOUT terraform:  } 2025-04-01 18:50:40.802627 | orchestrator | 18:50:40.802 
STDOUT terraform:  # terraform_data.image will be created 2025-04-01 18:50:40.802664 | orchestrator | 18:50:40.802 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-01 18:50:40.802700 | orchestrator | 18:50:40.802 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.802729 | orchestrator | 18:50:40.802 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-01 18:50:40.802766 | orchestrator | 18:50:40.802 STDOUT terraform:  + output = (known after apply) 2025-04-01 18:50:40.802788 | orchestrator | 18:50:40.802 STDOUT terraform:  } 2025-04-01 18:50:40.802830 | orchestrator | 18:50:40.802 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-01 18:50:40.802873 | orchestrator | 18:50:40.802 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-01 18:50:40.802910 | orchestrator | 18:50:40.802 STDOUT terraform:  + id = (known after apply) 2025-04-01 18:50:40.802939 | orchestrator | 18:50:40.802 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-01 18:50:40.802975 | orchestrator | 18:50:40.802 STDOUT terraform:  + output = (known after apply) 2025-04-01 18:50:40.802997 | orchestrator | 18:50:40.802 STDOUT terraform:  } 2025-04-01 18:50:40.803041 | orchestrator | 18:50:40.802 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-01 18:50:40.803063 | orchestrator | 18:50:40.803 STDOUT terraform: Changes to Outputs: 2025-04-01 18:50:40.803100 | orchestrator | 18:50:40.803 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-01 18:50:40.803139 | orchestrator | 18:50:40.803 STDOUT terraform:  + private_key = (sensitive value) 2025-04-01 18:50:40.977934 | orchestrator | 18:50:40.977 STDOUT terraform: terraform_data.image: Creating... 2025-04-01 18:50:40.978005 | orchestrator | 18:50:40.977 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7bcb795a-7992-a39d-db6e-f19fbe398315] 2025-04-01 18:50:40.978079 | orchestrator | 18:50:40.977 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-01 18:50:40.978196 | orchestrator | 18:50:40.978 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=77fe8078-05da-efea-0827-5257e9e297e7] 2025-04-01 18:50:40.987228 | orchestrator | 18:50:40.987 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-04-01 18:50:40.992936 | orchestrator | 18:50:40.992 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-01 18:50:40.995909 | orchestrator | 18:50:40.992 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-04-01 18:50:40.995966 | orchestrator | 18:50:40.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-01 18:50:40.995985 | orchestrator | 18:50:40.994 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-01 18:50:40.997112 | orchestrator | 18:50:40.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-04-01 18:50:40.997148 | orchestrator | 18:50:40.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-04-01 18:50:40.997164 | orchestrator | 18:50:40.996 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-01 18:50:41.000594 | orchestrator | 18:50:40.996 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-01 18:50:41.000643 | orchestrator | 18:50:41.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
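The plan closes with 82 resources to add and two sensitive outputs; the terraform_data resources simply carry the image name "Ubuntu 24.04" through the dependency graph. A sketch of that pattern follows; the output sources are assumptions, only the names and the sensitive flag follow from the plan.

# Sketch only: terraform_data echoes its input as output once applied.
variable "image" {
  default = "Ubuntu 24.04"
}

resource "terraform_data" "image" {
  input = var.image
}

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address   # source assumed
  sensitive = true                                                             # printed as "(sensitive value)" above
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content                              # source assumed
  sensitive = true
}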
2025-04-01 18:50:41.468423 | orchestrator | 18:50:41.467 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-01 18:50:41.471779 | orchestrator | 18:50:41.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-04-01 18:50:41.590227 | orchestrator | 18:50:41.589 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-01 18:50:41.594699 | orchestrator | 18:50:41.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-01 18:50:42.328219 | orchestrator | 18:50:42.327 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-04-01 18:50:42.333214 | orchestrator | 18:50:42.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-04-01 18:50:46.855100 | orchestrator | 18:50:46.854 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=6ffdf953-00cc-456a-bbd2-b4f1ff24317e] 2025-04-01 18:50:46.863190 | orchestrator | 18:50:46.862 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-04-01 18:50:50.993837 | orchestrator | 18:50:50.993 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-04-01 18:50:50.996138 | orchestrator | 18:50:50.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-01 18:50:50.997135 | orchestrator | 18:50:50.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-04-01 18:50:50.997186 | orchestrator | 18:50:50.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-04-01 18:50:50.997211 | orchestrator | 18:50:50.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-04-01 18:50:51.001366 | orchestrator | 18:50:51.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-04-01 18:50:51.472558 | orchestrator | 18:50:51.472 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-01 18:50:51.580270 | orchestrator | 18:50:51.579 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=f219ed29-ae42-40c1-a413-2af7dcf44905] 2025-04-01 18:50:51.580863 | orchestrator | 18:50:51.580 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=a9b8ece6-9486-4a7c-9bf5-40c217f02d2d] 2025-04-01 18:50:51.587901 | orchestrator | 18:50:51.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-01 18:50:51.588933 | orchestrator | 18:50:51.588 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-01 18:50:51.595044 | orchestrator | 18:50:51.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-04-01 18:50:51.604897 | orchestrator | 18:50:51.604 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 11s [id=19d966df-ef2b-4cdf-8cd3-e53e17cf39c1] 2025-04-01 18:50:51.615432 | orchestrator | 18:50:51.614 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 
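From here the apply fans out over the block-storage volumes: eighteen node_volume instances plus per-node base volumes and one manager base volume, most finishing in roughly ten seconds. A count-based sketch of how such a fleet of volumes is commonly declared; sizes and the naming scheme are assumptions, only the resource types and index ranges come from the log.

# Sketch only: counts match the indexes visible in the log; everything else is assumed.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18                                        # node_volume[0] .. node_volume[17]
  name  = "testbed-node-volume-${count.index}"      # name scheme assumed
  size  = 20                                        # size not visible in the log; assumed
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count    = 6                                      # one base volume per node, [0]..[5]
  name     = "testbed-node-base-${count.index}"     # assumed
  size     = 50                                     # assumed
  image_id = data.openstack_images_image_v2.image_node.id   # boot-from-volume pattern; assumed
}

resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
  count    = 1                                      # manager_base_volume[0]
  name     = "testbed-manager-base"                 # assumed
  size     = 50                                     # assumed
  image_id = data.openstack_images_image_v2.image.id         # assumed
}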
2025-04-01 18:50:51.616635 | orchestrator | 18:50:51.616 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=8c239a65-4acd-4227-a388-0863223ee363] 2025-04-01 18:50:51.621751 | orchestrator | 18:50:51.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-04-01 18:50:51.628295 | orchestrator | 18:50:51.628 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=9c75ca34-724f-40ca-ac18-00bb9ef52260] 2025-04-01 18:50:51.634092 | orchestrator | 18:50:51.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-04-01 18:50:51.635812 | orchestrator | 18:50:51.635 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03] 2025-04-01 18:50:51.641412 | orchestrator | 18:50:51.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-04-01 18:50:51.686788 | orchestrator | 18:50:51.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=ef05168f-fb35-4f94-a2bc-4c842347eaa7] 2025-04-01 18:50:51.693178 | orchestrator | 18:50:51.693 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-01 18:50:51.775917 | orchestrator | 18:50:51.775 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=75999ceb-501f-420c-8b43-800350cfb103] 2025-04-01 18:50:51.781500 | orchestrator | 18:50:51.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-04-01 18:50:52.334077 | orchestrator | 18:50:52.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-01 18:50:52.509739 | orchestrator | 18:50:52.509 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=fe75b96a-3751-4707-9d8f-14bf0ebec7cf] 2025-04-01 18:50:52.519733 | orchestrator | 18:50:52.519 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-04-01 18:50:56.865953 | orchestrator | 18:50:56.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-04-01 18:50:57.042523 | orchestrator | 18:50:57.042 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=146c5a64-e236-4e9d-aba9-c694e16f981b] 2025-04-01 18:50:57.051145 | orchestrator | 18:50:57.050 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-04-01 18:51:01.588160 | orchestrator | 18:51:01.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-04-01 18:51:01.590344 | orchestrator | 18:51:01.590 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-04-01 18:51:01.616792 | orchestrator | 18:51:01.616 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-04-01 18:51:01.623013 | orchestrator | 18:51:01.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-04-01 18:51:01.634291 | orchestrator | 18:51:01.634 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... 
[10s elapsed] 2025-04-01 18:51:01.642546 | orchestrator | 18:51:01.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-04-01 18:51:01.693902 | orchestrator | 18:51:01.693 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-04-01 18:51:01.768615 | orchestrator | 18:51:01.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=9ce3d96a-7a14-4bc8-9f00-60b125950ef0] 2025-04-01 18:51:01.777136 | orchestrator | 18:51:01.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=d4e136df-1ed3-4293-9f31-166cbf2340f4] 2025-04-01 18:51:01.781309 | orchestrator | 18:51:01.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-04-01 18:51:01.781796 | orchestrator | 18:51:01.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-01 18:51:01.791270 | orchestrator | 18:51:01.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-04-01 18:51:01.838289 | orchestrator | 18:51:01.837 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=063ac280-b641-4001-8d36-5300696e4f72] 2025-04-01 18:51:01.853418 | orchestrator | 18:51:01.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-04-01 18:51:01.857575 | orchestrator | 18:51:01.857 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=351e2311-cc99-4b1d-b7f8-98ba0727423c] 2025-04-01 18:51:01.859769 | orchestrator | 18:51:01.859 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c] 2025-04-01 18:51:01.863620 | orchestrator | 18:51:01.863 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-04-01 18:51:01.865306 | orchestrator | 18:51:01.865 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-04-01 18:51:01.871187 | orchestrator | 18:51:01.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=ab8dfbad-f338-4768-a4e7-f4b333b69279] 2025-04-01 18:51:01.884295 | orchestrator | 18:51:01.884 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-04-01 18:51:01.887881 | orchestrator | 18:51:01.887 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=3b8b6537-11b2-4db3-b62a-18312f3aa6f8] 2025-04-01 18:51:01.888514 | orchestrator | 18:51:01.888 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=24efa0b79fc615833f5e235092deb94915599d6b] 2025-04-01 18:51:01.900449 | orchestrator | 18:51:01.900 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-04-01 18:51:01.901524 | orchestrator | 18:51:01.901 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
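The id_rsa/id_rsa.pub files and the "testbed" keypair created here hold the SSH material the deployment uses later. One common way to produce exactly these resources is to generate the key inside Terraform and write it to disk; whether the testbed does it this way is not visible in the log, so treat the generator and the paths below as assumptions.

# Sketch only: the tls_private_key generator and the file paths are assumptions;
# the resource names on the left match the ones created in the log.
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "openstack_compute_keypair_v2" "key" {
  name       = "testbed"                                  # id shown in the log
  public_key = tls_private_key.ssh.public_key_openssh
}

resource "local_file" "id_rsa_pub" {
  content  = tls_private_key.ssh.public_key_openssh
  filename = "${path.module}/.id_rsa.pub"                 # path assumed
}

resource "local_sensitive_file" "id_rsa" {
  content  = tls_private_key.ssh.private_key_openssh
  filename = "${path.module}/.id_rsa"                     # path assumed
}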
2025-04-01 18:51:01.908996 | orchestrator | 18:51:01.908 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=c026b6a685000c522f4bbe97796042e821d373ae] 2025-04-01 18:51:01.966994 | orchestrator | 18:51:01.966 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=5fefcc5b-05b8-4046-aae3-ed6d9b3b967c] 2025-04-01 18:51:02.521223 | orchestrator | 18:51:02.520 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-04-01 18:51:02.884619 | orchestrator | 18:51:02.884 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=a4fd7d26-ea4c-418a-8803-23ebe44f168e] 2025-04-01 18:51:07.052283 | orchestrator | 18:51:07.051 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-01 18:51:07.360662 | orchestrator | 18:51:07.360 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=ae18c0ec-da2f-45ed-b23b-40c75813e891] 2025-04-01 18:51:08.440044 | orchestrator | 18:51:08.439 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=88c22c67-c046-4399-b39d-d32b48f9cabe] 2025-04-01 18:51:08.659411 | orchestrator | 18:51:08.446 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-04-01 18:51:11.782296 | orchestrator | 18:51:11.781 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-04-01 18:51:11.793447 | orchestrator | 18:51:11.793 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-04-01 18:51:11.853650 | orchestrator | 18:51:11.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-04-01 18:51:11.864952 | orchestrator | 18:51:11.864 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-04-01 18:51:11.866220 | orchestrator | 18:51:11.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-04-01 18:51:12.115157 | orchestrator | 18:51:12.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=d5ebd160-46c7-4645-bffc-e57cafdc3124] 2025-04-01 18:51:12.166336 | orchestrator | 18:51:12.166 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=9f2a8a05-1f0e-4612-894f-941da9ace46e] 2025-04-01 18:51:12.205746 | orchestrator | 18:51:12.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=32d92a12-18a7-4405-92c1-c5a976ec5319] 2025-04-01 18:51:12.231331 | orchestrator | 18:51:12.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=250a6be6-ee42-4653-b909-5b3edf0d7432] 2025-04-01 18:51:12.261191 | orchestrator | 18:51:12.260 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=b705f53a-fcc8-4831-99c5-1b34182e7d6c] 2025-04-01 18:51:15.079510 | orchestrator | 18:51:15.079 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=e795c19e-00c3-4fb4-99e4-6ca5678679d1] 2025-04-01 18:51:15.085230 | orchestrator | 18:51:15.084 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
2025-04-01 18:51:15.086557 | orchestrator | 18:51:15.086 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-01 18:51:15.087456 | orchestrator | 18:51:15.087 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-01 18:51:15.230306 | orchestrator | 18:51:15.229 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=3d41de39-44a6-44d2-a96b-0589fee7e0d6] 2025-04-01 18:51:15.238253 | orchestrator | 18:51:15.237 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-01 18:51:15.239444 | orchestrator | 18:51:15.239 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-01 18:51:15.243371 | orchestrator | 18:51:15.243 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8491e580-b214-4fe3-93ee-31978e36d401] 2025-04-01 18:51:15.243717 | orchestrator | 18:51:15.243 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-01 18:51:15.245414 | orchestrator | 18:51:15.245 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-04-01 18:51:15.249665 | orchestrator | 18:51:15.249 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-01 18:51:15.250825 | orchestrator | 18:51:15.250 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-01 18:51:15.257790 | orchestrator | 18:51:15.257 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-04-01 18:51:15.259056 | orchestrator | 18:51:15.258 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-04-01 18:51:15.260262 | orchestrator | 18:51:15.260 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-04-01 18:51:15.379529 | orchestrator | 18:51:15.379 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=49bf93db-9abc-4d93-8778-a7bcd5c94ca6] 2025-04-01 18:51:15.385916 | orchestrator | 18:51:15.385 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-01 18:51:15.430759 | orchestrator | 18:51:15.430 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=3013f548-3907-4690-b9f0-c69f29987246] 2025-04-01 18:51:15.439888 | orchestrator | 18:51:15.439 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-04-01 18:51:15.534621 | orchestrator | 18:51:15.534 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=649a32a2-15a7-4095-adfc-001fc7c4811e] 2025-04-01 18:51:15.548933 | orchestrator | 18:51:15.548 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-01 18:51:15.561459 | orchestrator | 18:51:15.561 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=b7e61622-84cb-416c-8523-c9bd33a7d472] 2025-04-01 18:51:15.567802 | orchestrator | 18:51:15.567 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
2025-04-01 18:51:15.827992 | orchestrator | 18:51:15.827 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=a5f02dee-177d-42f8-9f15-c53d3abaa1b7] 2025-04-01 18:51:15.839272 | orchestrator | 18:51:15.839 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=12236236-ca4f-4658-ae1f-68ab30f854dc] 2025-04-01 18:51:15.841603 | orchestrator | 18:51:15.841 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-01 18:51:15.849669 | orchestrator | 18:51:15.849 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-04-01 18:51:15.956928 | orchestrator | 18:51:15.956 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=fd792de9-9960-491d-9fc7-8d4b797ba938] 2025-04-01 18:51:15.967755 | orchestrator | 18:51:15.967 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-04-01 18:51:15.990715 | orchestrator | 18:51:15.990 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=bfd9c4c8-6410-4237-93ea-cea856c1b626] 2025-04-01 18:51:16.106725 | orchestrator | 18:51:16.106 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=14917913-a501-4cb3-abc3-6ace2f1aa24b] 2025-04-01 18:51:20.865322 | orchestrator | 18:51:20.864 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=90d45961-3234-4144-9ff4-b774e295aba5] 2025-04-01 18:51:20.953182 | orchestrator | 18:51:20.952 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=cc9abd11-34b4-48b4-aea7-8af23e85e14e] 2025-04-01 18:51:21.230690 | orchestrator | 18:51:21.230 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=f324a142-67ca-48f8-80ca-db6f1935fdcf] 2025-04-01 18:51:21.286193 | orchestrator | 18:51:21.285 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=72c838fc-43e2-4295-97b0-a2652d16c0e9] 2025-04-01 18:51:21.368723 | orchestrator | 18:51:21.368 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=c26adcb7-329c-4f93-babe-13501b7a868a] 2025-04-01 18:51:21.645775 | orchestrator | 18:51:21.645 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3fb2f087-e4fd-4afe-b1e0-baec76efda3a] 2025-04-01 18:51:21.999206 | orchestrator | 18:51:21.998 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=dd62cb0f-7b78-47d7-81b1-c8e1faabd157] 2025-04-01 18:51:22.757690 | orchestrator | 18:51:22.757 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=92ad1d22-a9fd-44f7-9d32-f1f88ecac02b] 2025-04-01 18:51:22.809716 | orchestrator | 18:51:22.809 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-01 18:51:22.811646 | orchestrator | 18:51:22.811 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-04-01 18:51:22.838205 | orchestrator | 18:51:22.838 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 
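The manager's public reachability comes from the floating IP allocated and associated here, and the address is also written to a local MANAGER_ADDRESS file that the job can use to reach the manager afterwards. A sketch of that wiring, with the pool name and file path as assumptions:

# Sketch only: pool name and file path assumed, resource names match the log.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"                                        # pool name not visible in the log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "local_file" "MANAGER_ADDRESS" {
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
  filename = "${path.module}/.MANAGER_ADDRESS"             # path assumed
}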
2025-04-01 18:51:22.838246 | orchestrator | 18:51:22.838 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-01 18:51:22.838280 | orchestrator | 18:51:22.838 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-04-01 18:51:22.838290 | orchestrator | 18:51:22.838 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-01 18:51:22.851686 | orchestrator | 18:51:22.850 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-01 18:51:31.469585 | orchestrator | 18:51:31.469 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 8s [id=24f30b70-dea2-4e7b-b632-bf4dbd5502b4] 2025-04-01 18:51:31.482631 | orchestrator | 18:51:31.481 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-01 18:51:31.487244 | orchestrator | 18:51:31.487 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-01 18:51:31.489672 | orchestrator | 18:51:31.489 STDOUT terraform: local_file.inventory: Creating... 2025-04-01 18:51:31.492879 | orchestrator | 18:51:31.492 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0185d2f6af304f542bf7e78ce24a63e9321962a9] 2025-04-01 18:51:31.500521 | orchestrator | 18:51:31.500 STDOUT terraform: local_file.inventory: Creation complete after 1s [id=6ec2476866f8024df4acf14331c9559555054dfc] 2025-04-01 18:51:32.103669 | orchestrator | 18:51:32.103 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=24f30b70-dea2-4e7b-b632-bf4dbd5502b4] 2025-04-01 18:51:32.812221 | orchestrator | 18:51:32.811 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-01 18:51:32.834414 | orchestrator | 18:51:32.834 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-04-01 18:51:32.836614 | orchestrator | 18:51:32.836 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-01 18:51:32.836734 | orchestrator | 18:51:32.836 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-01 18:51:32.840945 | orchestrator | 18:51:32.840 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-01 18:51:32.851281 | orchestrator | 18:51:32.851 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-01 18:51:42.812661 | orchestrator | 18:51:42.812 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-04-01 18:51:42.835118 | orchestrator | 18:51:42.834 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-01 18:51:42.837362 | orchestrator | 18:51:42.837 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-04-01 18:51:42.837444 | orchestrator | 18:51:42.837 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-04-01 18:51:42.841598 | orchestrator | 18:51:42.841 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-01 18:51:42.852197 | orchestrator | 18:51:42.852 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[20s elapsed] 2025-04-01 18:51:43.152927 | orchestrator | 18:51:43.152 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=f7417d04-2b15-4e75-a3b9-343c73b3491f] 2025-04-01 18:51:52.814596 | orchestrator | 18:51:52.814 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-04-01 18:51:52.835688 | orchestrator | 18:51:52.835 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-04-01 18:51:52.837967 | orchestrator | 18:51:52.837 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-04-01 18:51:52.842219 | orchestrator | 18:51:52.842 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-04-01 18:51:52.852659 | orchestrator | 18:51:52.852 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-04-01 18:51:53.446553 | orchestrator | 18:51:53.446 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=ccf00b5a-a574-4313-8c37-a734ce06f8b8] 2025-04-01 18:51:53.477004 | orchestrator | 18:51:53.476 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=453cded1-8e1e-4d7e-88a4-99887cf09090] 2025-04-01 18:52:02.837046 | orchestrator | 18:52:02.836 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-04-01 18:52:02.842382 | orchestrator | 18:52:02.842 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-04-01 18:52:02.853674 | orchestrator | 18:52:02.853 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-04-01 18:52:03.540253 | orchestrator | 18:52:03.539 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=2148ce4d-883f-4f30-a61a-0b41a61cc27e] 2025-04-01 18:52:03.561571 | orchestrator | 18:52:03.561 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=fe3074e7-6b63-4112-b45b-0af676aaa7a1] 2025-04-01 18:52:03.575612 | orchestrator | 18:52:03.575 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=5fc6f6c3-bc6e-498b-88b7-bda7d754ce91] 2025-04-01 18:52:03.606620 | orchestrator | 18:52:03.606 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-01 18:52:03.608716 | orchestrator | 18:52:03.608 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1924479195881608852] 2025-04-01 18:52:03.612935 | orchestrator | 18:52:03.612 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-01 18:52:03.612987 | orchestrator | 18:52:03.612 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-04-01 18:52:03.615577 | orchestrator | 18:52:03.615 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-04-01 18:52:03.616600 | orchestrator | 18:52:03.616 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-01 18:52:03.620611 | orchestrator | 18:52:03.620 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-04-01 18:52:03.622661 | orchestrator | 18:52:03.622 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 
2025-04-01 18:52:03.635109 | orchestrator | 18:52:03.634 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-01 18:52:03.635955 | orchestrator | 18:52:03.635 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-04-01 18:52:03.641163 | orchestrator | 18:52:03.641 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-04-01 18:52:03.644849 | orchestrator | 18:52:03.644 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-01 18:52:08.927546 | orchestrator | 18:52:08.927 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=fe3074e7-6b63-4112-b45b-0af676aaa7a1/d4e136df-1ed3-4293-9f31-166cbf2340f4] 2025-04-01 18:52:08.929712 | orchestrator | 18:52:08.929 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=ccf00b5a-a574-4313-8c37-a734ce06f8b8/9ce3d96a-7a14-4bc8-9f00-60b125950ef0] 2025-04-01 18:52:08.942313 | orchestrator | 18:52:08.942 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-01 18:52:08.943403 | orchestrator | 18:52:08.943 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-01 18:52:08.958190 | orchestrator | 18:52:08.957 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=5fc6f6c3-bc6e-498b-88b7-bda7d754ce91/75999ceb-501f-420c-8b43-800350cfb103] 2025-04-01 18:52:08.968043 | orchestrator | 18:52:08.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-04-01 18:52:09.009524 | orchestrator | 18:52:09.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=fe3074e7-6b63-4112-b45b-0af676aaa7a1/9c75ca34-724f-40ca-ac18-00bb9ef52260] 2025-04-01 18:52:09.025036 | orchestrator | 18:52:09.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=ccf00b5a-a574-4313-8c37-a734ce06f8b8/8c239a65-4acd-4227-a388-0863223ee363] 2025-04-01 18:52:09.027394 | orchestrator | 18:52:09.027 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-04-01 18:52:09.035867 | orchestrator | 18:52:09.035 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=5fc6f6c3-bc6e-498b-88b7-bda7d754ce91/a9b8ece6-9486-4a7c-9bf5-40c217f02d2d] 2025-04-01 18:52:09.038820 | orchestrator | 18:52:09.038 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-01 18:52:09.056693 | orchestrator | 18:52:09.056 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-04-01 18:52:09.078574 | orchestrator | 18:52:09.078 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=2148ce4d-883f-4f30-a61a-0b41a61cc27e/3b8b6537-11b2-4db3-b62a-18312f3aa6f8] 2025-04-01 18:52:09.087745 | orchestrator | 18:52:09.085 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 
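Each openstack_compute_volume_attach_v2 ID reported above is a composite of <server id>/<volume id>. A minimal sketch for inspecting one attachment after the run, assuming access to the Terraform working directory with its state and the same OpenStack credentials (the resource address and IDs are taken from the log above):

  # Attachment [14] pairs server fe3074e7-... with volume d4e136df-...
  terraform state show 'openstack_compute_volume_attach_v2.node_volume_attachment[14]'
  openstack volume show d4e136df-1ed3-4293-9f31-166cbf2340f4 -c status -c attachments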
2025-04-01 18:52:09.172755 | orchestrator | 18:52:09.172 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=fe3074e7-6b63-4112-b45b-0af676aaa7a1/fe75b96a-3751-4707-9d8f-14bf0ebec7cf] 2025-04-01 18:52:09.196694 | orchestrator | 18:52:09.192 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-01 18:52:09.214364 | orchestrator | 18:52:09.196 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=453cded1-8e1e-4d7e-88a4-99887cf09090/dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03] 2025-04-01 18:52:09.214422 | orchestrator | 18:52:09.214 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-04-01 18:52:09.457226 | orchestrator | 18:52:09.456 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=453cded1-8e1e-4d7e-88a4-99887cf09090/063ac280-b641-4001-8d36-5300696e4f72] 2025-04-01 18:52:14.250314 | orchestrator | 18:52:14.249 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=f7417d04-2b15-4e75-a3b9-343c73b3491f/f219ed29-ae42-40c1-a413-2af7dcf44905] 2025-04-01 18:52:14.269817 | orchestrator | 18:52:14.269 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=ccf00b5a-a574-4313-8c37-a734ce06f8b8/146c5a64-e236-4e9d-aba9-c694e16f981b] 2025-04-01 18:52:14.279597 | orchestrator | 18:52:14.279 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=2148ce4d-883f-4f30-a61a-0b41a61cc27e/e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c] 2025-04-01 18:52:14.336902 | orchestrator | 18:52:14.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=f7417d04-2b15-4e75-a3b9-343c73b3491f/351e2311-cc99-4b1d-b7f8-98ba0727423c] 2025-04-01 18:52:14.401749 | orchestrator | 18:52:14.401 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=453cded1-8e1e-4d7e-88a4-99887cf09090/19d966df-ef2b-4cdf-8cd3-e53e17cf39c1] 2025-04-01 18:52:14.427247 | orchestrator | 18:52:14.426 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=f7417d04-2b15-4e75-a3b9-343c73b3491f/5fefcc5b-05b8-4046-aae3-ed6d9b3b967c] 2025-04-01 18:52:14.483958 | orchestrator | 18:52:14.483 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=2148ce4d-883f-4f30-a61a-0b41a61cc27e/ef05168f-fb35-4f94-a2bc-4c842347eaa7] 2025-04-01 18:52:14.552364 | orchestrator | 18:52:14.551 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 6s [id=5fc6f6c3-bc6e-498b-88b7-bda7d754ce91/ab8dfbad-f338-4768-a4e7-f4b333b69279] 2025-04-01 18:52:19.216232 | orchestrator | 18:52:19.215 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-01 18:52:29.216937 | orchestrator | 18:52:29.216 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-04-01 18:52:29.716074 | orchestrator | 18:52:29.715 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=9d8b74cf-ec1e-4c96-a232-6062c6e150ec] 2025-04-01 18:52:29.746452 | orchestrator | 18:52:29.746 STDOUT terraform: Apply complete! 
Resources: 82 added, 0 changed, 0 destroyed. 2025-04-01 18:52:29.746537 | orchestrator | 18:52:29.746 STDOUT terraform: Outputs: 2025-04-01 18:52:29.746553 | orchestrator | 18:52:29.746 STDOUT terraform: manager_address = 2025-04-01 18:52:29.754952 | orchestrator | 18:52:29.746 STDOUT terraform: private_key = 2025-04-01 18:52:40.217408 | orchestrator | changed 2025-04-01 18:52:40.257273 | 2025-04-01 18:52:40.257393 | TASK [Fetch manager address] 2025-04-01 18:52:40.617750 | orchestrator | ok 2025-04-01 18:52:40.628079 | 2025-04-01 18:52:40.628207 | TASK [Set manager_host address] 2025-04-01 18:52:40.727978 | orchestrator | ok 2025-04-01 18:52:40.738446 | 2025-04-01 18:52:40.738538 | LOOP [Update ansible collections] 2025-04-01 18:52:41.465290 | orchestrator | changed 2025-04-01 18:52:42.172793 | orchestrator | changed 2025-04-01 18:52:42.199626 | 2025-04-01 18:52:42.199770 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-01 18:52:52.730804 | orchestrator | ok 2025-04-01 18:52:52.743361 | 2025-04-01 18:52:52.743478 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-01 18:53:52.796801 | orchestrator | ok 2025-04-01 18:53:52.809302 | 2025-04-01 18:53:52.809432 | TASK [Fetch manager ssh hostkey] 2025-04-01 18:53:53.850204 | orchestrator | Output suppressed because no_log was given 2025-04-01 18:53:53.872322 | 2025-04-01 18:53:53.872518 | TASK [Get ssh keypair from terraform environment] 2025-04-01 18:53:54.447902 | orchestrator | changed 2025-04-01 18:53:54.463548 | 2025-04-01 18:53:54.463704 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-01 18:53:54.511693 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
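The two Terraform outputs shown above, manager_address and private_key (their values are not printed in the log), are what the subsequent "Fetch manager address" and "Get ssh keypair from terraform environment" tasks consume. A minimal sketch of reading them by hand with the standard Terraform CLI, run from the same working directory as the apply; the key file name is hypothetical:

  terraform output -raw manager_address
  terraform output -raw private_key > terraform_id_rsa   # hypothetical file name
  chmod 600 terraform_id_rsa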
2025-04-01 18:53:54.521673 | 2025-04-01 18:53:54.521780 | TASK [Run manager part 0] 2025-04-01 18:53:55.330009 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-01 18:53:55.374124 | orchestrator | 2025-04-01 18:53:57.357466 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-01 18:53:57.357571 | orchestrator | 2025-04-01 18:53:57.357594 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-01 18:53:57.357613 | orchestrator | ok: [testbed-manager] 2025-04-01 18:53:59.227729 | orchestrator | 2025-04-01 18:53:59.227906 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-01 18:53:59.227922 | orchestrator | 2025-04-01 18:53:59.227928 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 18:53:59.227946 | orchestrator | ok: [testbed-manager] 2025-04-01 18:53:59.877988 | orchestrator | 2025-04-01 18:53:59.878134 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-01 18:53:59.878173 | orchestrator | ok: [testbed-manager] 2025-04-01 18:53:59.918326 | orchestrator | 2025-04-01 18:53:59.918395 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-01 18:53:59.918413 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:53:59.943444 | orchestrator | 2025-04-01 18:53:59.943525 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-01 18:53:59.943544 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:53:59.971115 | orchestrator | 2025-04-01 18:53:59.971190 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-01 18:53:59.971219 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:54:00.004575 | orchestrator | 2025-04-01 18:54:00.004632 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-01 18:54:00.004649 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:54:00.029149 | orchestrator | 2025-04-01 18:54:00.029201 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-01 18:54:00.029217 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:54:00.061129 | orchestrator | 2025-04-01 18:54:00.061184 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-01 18:54:00.061212 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:54:00.091269 | orchestrator | 2025-04-01 18:54:00.091317 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-01 18:54:00.091334 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:54:00.860812 | orchestrator | 2025-04-01 18:54:00.860888 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-01 18:54:00.860905 | orchestrator | changed: [testbed-manager] 2025-04-01 18:56:51.457693 | orchestrator | 2025-04-01 18:56:51.457795 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-01 18:56:51.457830 | orchestrator | changed: [testbed-manager] 2025-04-01 18:58:15.557763 | orchestrator | 2025-04-01 18:58:15.557877 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-04-01 18:58:15.557909 | orchestrator | changed: [testbed-manager] 2025-04-01 18:58:37.492782 | orchestrator | 2025-04-01 18:58:37.492889 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-01 18:58:37.492923 | orchestrator | changed: [testbed-manager] 2025-04-01 18:58:47.194552 | orchestrator | 2025-04-01 18:58:47.194661 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-01 18:58:47.194698 | orchestrator | changed: [testbed-manager] 2025-04-01 18:58:47.242256 | orchestrator | 2025-04-01 18:58:47.242316 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-01 18:58:47.242344 | orchestrator | ok: [testbed-manager] 2025-04-01 18:58:48.098755 | orchestrator | 2025-04-01 18:58:48.098846 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-01 18:58:48.098876 | orchestrator | ok: [testbed-manager] 2025-04-01 18:58:48.878779 | orchestrator | 2025-04-01 18:58:48.878870 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-01 18:58:48.878907 | orchestrator | changed: [testbed-manager] 2025-04-01 18:58:55.843836 | orchestrator | 2025-04-01 18:58:55.843936 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-01 18:58:55.843973 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:02.866346 | orchestrator | 2025-04-01 18:59:02.866455 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-01 18:59:02.866504 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:05.984236 | orchestrator | 2025-04-01 18:59:05.984383 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-01 18:59:05.984421 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:07.943634 | orchestrator | 2025-04-01 18:59:07.943729 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-01 18:59:07.943762 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:09.162837 | orchestrator | 2025-04-01 18:59:09.162934 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-01 18:59:09.162968 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-01 18:59:09.204726 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-01 18:59:09.204807 | orchestrator | 2025-04-01 18:59:09.204826 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-01 18:59:09.204849 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-01 18:59:12.724763 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-01 18:59:12.724850 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-01 18:59:12.724866 | orchestrator | deprecation_warnings=False in ansible.cfg. 
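The part-0 steps reported above (create a venv, then install netaddr, ansible-core, requests and docker into it) reduce to a few shell commands. A rough, hand-run approximation with the version pins taken from the task names; the /opt/venv path is the one referenced later in the log, and the playbook may pass additional options:

  python3 -m venv /opt/venv
  /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'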
2025-04-01 18:59:12.724891 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-01 18:59:13.308106 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-01 18:59:13.308213 | orchestrator | 2025-04-01 18:59:13.308234 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-01 18:59:13.308265 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:32.939178 | orchestrator | 2025-04-01 18:59:32.939233 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-01 18:59:32.939257 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-01 18:59:35.421430 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-01 18:59:35.421478 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-01 18:59:35.421485 | orchestrator | 2025-04-01 18:59:35.421492 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-01 18:59:35.421505 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-04-01 18:59:36.904550 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-01 18:59:36.904648 | orchestrator | 2025-04-01 18:59:36.904668 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-01 18:59:36.904699 | orchestrator | 2025-04-01 18:59:36.904715 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 18:59:36.904746 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:36.951317 | orchestrator | 2025-04-01 18:59:36.951367 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-01 18:59:36.951382 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:37.026800 | orchestrator | 2025-04-01 18:59:37.026860 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-01 18:59:37.026884 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:37.816347 | orchestrator | 2025-04-01 18:59:37.816438 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-01 18:59:37.816473 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:38.572249 | orchestrator | 2025-04-01 18:59:38.572377 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-01 18:59:38.572432 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:39.980158 | orchestrator | 2025-04-01 18:59:39.980339 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-01 18:59:39.980369 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-01 18:59:41.370080 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-01 18:59:41.370137 | orchestrator | 2025-04-01 18:59:41.370149 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-01 18:59:41.370167 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:43.195794 | orchestrator | 2025-04-01 18:59:43.195884 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-01 18:59:43.195916 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 
18:59:43.787953 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-01 18:59:43.788046 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-01 18:59:43.788066 | orchestrator | 2025-04-01 18:59:43.788082 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-01 18:59:43.788112 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:43.857044 | orchestrator | 2025-04-01 18:59:43.857125 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-01 18:59:43.857158 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:44.752368 | orchestrator | 2025-04-01 18:59:44.752448 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-01 18:59:44.752473 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 18:59:44.789499 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:44.789575 | orchestrator | 2025-04-01 18:59:44.789586 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-01 18:59:44.789605 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:44.819829 | orchestrator | 2025-04-01 18:59:44.819934 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-01 18:59:44.819971 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:44.860046 | orchestrator | 2025-04-01 18:59:44.860100 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-01 18:59:44.860117 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:44.908265 | orchestrator | 2025-04-01 18:59:44.908318 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-01 18:59:44.908333 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:45.676028 | orchestrator | 2025-04-01 18:59:45.676121 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-01 18:59:45.676156 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:47.125369 | orchestrator | 2025-04-01 18:59:47.125410 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-01 18:59:47.125417 | orchestrator | 2025-04-01 18:59:47.125423 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 18:59:47.125434 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:48.179265 | orchestrator | 2025-04-01 18:59:48.179551 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-01 18:59:48.179591 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:48.293238 | orchestrator | 2025-04-01 18:59:48.293478 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 18:59:48.293505 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-01 18:59:48.293549 | orchestrator | 2025-04-01 18:59:48.766335 | orchestrator | changed 2025-04-01 18:59:48.785255 | 2025-04-01 18:59:48.785388 | TASK [Point out that the log in on the manager is now possible] 2025-04-01 18:59:48.836588 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
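The osism.commons.operator tasks above roughly correspond to the following user-management commands. The operator user name (dragon) is inferred from the /home/dragon paths that appear later in the log; the groups, locale exports and password lock come from the task output, while the login shell and directory mode are assumptions:

  groupadd dragon                                          # Create operator group
  useradd -g dragon -G adm,sudo -m -s /bin/bash dragon     # Create user, add to adm and sudo
  install -d -m 700 -o dragon -g dragon /home/dragon/.ssh  # Create .ssh directory
  printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc
  passwd -l dragon                                         # Unset & lock password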
2025-04-01 18:59:48.847509 | 2025-04-01 18:59:48.847618 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-01 18:59:48.893087 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-04-01 18:59:48.902110 | 2025-04-01 18:59:48.902234 | TASK [Run manager part 1 + 2] 2025-04-01 18:59:49.766993 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-01 18:59:49.827125 | orchestrator | 2025-04-01 18:59:52.351383 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-01 18:59:52.351431 | orchestrator | 2025-04-01 18:59:52.351452 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 18:59:52.351469 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:52.386240 | orchestrator | 2025-04-01 18:59:52.386298 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-01 18:59:52.386318 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:52.418334 | orchestrator | 2025-04-01 18:59:52.418393 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-01 18:59:52.418416 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:52.451210 | orchestrator | 2025-04-01 18:59:52.451275 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-01 18:59:52.451299 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:52.520686 | orchestrator | 2025-04-01 18:59:52.520734 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-01 18:59:52.520749 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:52.592412 | orchestrator | 2025-04-01 18:59:52.592493 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-01 18:59:52.592545 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:52.636357 | orchestrator | 2025-04-01 18:59:52.636431 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-01 18:59:52.636467 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-01 18:59:53.385380 | orchestrator | 2025-04-01 18:59:53.385461 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-01 18:59:53.385493 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:53.430485 | orchestrator | 2025-04-01 18:59:53.430565 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-01 18:59:53.430596 | orchestrator | skipping: [testbed-manager] 2025-04-01 18:59:54.781349 | orchestrator | 2025-04-01 18:59:54.781439 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-01 18:59:54.781483 | orchestrator | changed: [testbed-manager] 2025-04-01 18:59:55.357966 | orchestrator | 2025-04-01 18:59:55.358107 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-01 18:59:55.358142 | orchestrator | ok: [testbed-manager] 2025-04-01 18:59:56.488585 | orchestrator | 2025-04-01 18:59:56.488628 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-01 18:59:56.488640 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:10.480729 | orchestrator | 2025-04-01 19:00:10.480802 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-01 19:00:10.480830 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:11.119084 | orchestrator | 2025-04-01 19:00:11.119174 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-01 19:00:11.119205 | orchestrator | ok: [testbed-manager] 2025-04-01 19:00:11.166832 | orchestrator | 2025-04-01 19:00:11.166891 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-01 19:00:11.166917 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:00:12.186880 | orchestrator | 2025-04-01 19:00:12.186991 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-01 19:00:12.187025 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:13.228859 | orchestrator | 2025-04-01 19:00:13.228987 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-01 19:00:13.229024 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:13.825822 | orchestrator | 2025-04-01 19:00:13.825929 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-01 19:00:13.825964 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:13.865587 | orchestrator | 2025-04-01 19:00:13.865648 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-01 19:00:13.865676 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-01 19:00:16.240608 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-01 19:00:16.240747 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-01 19:00:16.240766 | orchestrator | deprecation_warnings=False in ansible.cfg. 
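The osism.commons.repository steps above replace Ubuntu's classic sources.list with a deb822-style ubuntu.sources file plus a 99osism apt configuration snippet, then refresh the package cache. A hand-run approximation; the contents of the two copied files are not shown in the log, and the 99osism target path is an assumption:

  sudo install -d /etc/apt/sources.list.d
  sudo cp 99osism /etc/apt/apt.conf.d/99osism                    # assumed target path
  sudo cp ubuntu.sources /etc/apt/sources.list.d/ubuntu.sources
  sudo rm -f /etc/apt/sources.list
  sudo apt-get update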
2025-04-01 19:00:16.240798 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:25.997404 | orchestrator | 2025-04-01 19:00:25.997469 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-01 19:00:25.997485 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-01 19:00:27.094309 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-01 19:00:27.094408 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-01 19:00:27.094428 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-01 19:00:27.094445 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-01 19:00:27.094460 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-01 19:00:27.094475 | orchestrator | 2025-04-01 19:00:27.094489 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-01 19:00:27.094560 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:27.154166 | orchestrator | 2025-04-01 19:00:27.154275 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-01 19:00:27.154308 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:00:30.404164 | orchestrator | 2025-04-01 19:00:30.404241 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-01 19:00:30.404267 | orchestrator | changed: [testbed-manager] 2025-04-01 19:00:30.447574 | orchestrator | 2025-04-01 19:00:30.447641 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-01 19:00:30.447665 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:02:16.189301 | orchestrator | 2025-04-01 19:02:16.189417 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-01 19:02:16.189454 | orchestrator | changed: [testbed-manager] 2025-04-01 19:02:17.414189 | orchestrator | 2025-04-01 19:02:17.414238 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-01 19:02:17.414254 | orchestrator | ok: [testbed-manager] 2025-04-01 19:02:17.513018 | orchestrator | 2025-04-01 19:02:17.513149 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:02:17.513171 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-01 19:02:17.513188 | orchestrator | 2025-04-01 19:02:17.578572 | orchestrator | changed 2025-04-01 19:02:17.590277 | 2025-04-01 19:02:17.590391 | TASK [Reboot manager] 2025-04-01 19:02:19.130829 | orchestrator | changed 2025-04-01 19:02:19.148718 | 2025-04-01 19:02:19.148853 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-01 19:02:35.557947 | orchestrator | ok 2025-04-01 19:02:35.569036 | 2025-04-01 19:02:35.569129 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-01 19:03:35.619154 | orchestrator | ok 2025-04-01 19:03:35.629689 | 2025-04-01 19:03:35.629788 | TASK [Deploy manager + bootstrap nodes] 2025-04-01 19:03:38.317442 | orchestrator | 2025-04-01 19:03:38.321406 | orchestrator | # DEPLOY MANAGER 2025-04-01 19:03:38.321443 | orchestrator | 2025-04-01 19:03:38.321460 | orchestrator | + set -e 2025-04-01 19:03:38.321506 | orchestrator | + echo 2025-04-01 19:03:38.321525 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-01 19:03:38.321542 | 
orchestrator | + echo 2025-04-01 19:03:38.321591 | orchestrator | + cat /opt/manager-vars.sh 2025-04-01 19:03:38.321627 | orchestrator | export NUMBER_OF_NODES=6 2025-04-01 19:03:38.322773 | orchestrator | 2025-04-01 19:03:38.322797 | orchestrator | export CEPH_VERSION=quincy 2025-04-01 19:03:38.322812 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-01 19:03:38.322826 | orchestrator | export MANAGER_VERSION=8.1.0 2025-04-01 19:03:38.322841 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-01 19:03:38.322855 | orchestrator | 2025-04-01 19:03:38.322871 | orchestrator | export ARA=false 2025-04-01 19:03:38.322885 | orchestrator | export TEMPEST=false 2025-04-01 19:03:38.322900 | orchestrator | export IS_ZUUL=true 2025-04-01 19:03:38.322914 | orchestrator | 2025-04-01 19:03:38.322928 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:03:38.322944 | orchestrator | export EXTERNAL_API=false 2025-04-01 19:03:38.322958 | orchestrator | 2025-04-01 19:03:38.322980 | orchestrator | export IMAGE_USER=ubuntu 2025-04-01 19:03:38.322994 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-01 19:03:38.323009 | orchestrator | 2025-04-01 19:03:38.323023 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-01 19:03:38.323038 | orchestrator | 2025-04-01 19:03:38.323052 | orchestrator | + echo 2025-04-01 19:03:38.323074 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-01 19:03:38.323095 | orchestrator | ++ export INTERACTIVE=false 2025-04-01 19:03:38.323110 | orchestrator | ++ INTERACTIVE=false 2025-04-01 19:03:38.323132 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-01 19:03:38.323155 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-01 19:03:38.323170 | orchestrator | + source /opt/manager-vars.sh 2025-04-01 19:03:38.323191 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-01 19:03:38.323205 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-01 19:03:38.323220 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-01 19:03:38.323234 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-01 19:03:38.323247 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-01 19:03:38.323261 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-01 19:03:38.323283 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-01 19:03:38.323297 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-01 19:03:38.323311 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-01 19:03:38.323325 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-01 19:03:38.323339 | orchestrator | ++ export ARA=false 2025-04-01 19:03:38.323357 | orchestrator | ++ ARA=false 2025-04-01 19:03:38.381925 | orchestrator | ++ export TEMPEST=false 2025-04-01 19:03:38.381950 | orchestrator | ++ TEMPEST=false 2025-04-01 19:03:38.381964 | orchestrator | ++ export IS_ZUUL=true 2025-04-01 19:03:38.381978 | orchestrator | ++ IS_ZUUL=true 2025-04-01 19:03:38.381992 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:03:38.382006 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:03:38.382082 | orchestrator | ++ export EXTERNAL_API=false 2025-04-01 19:03:38.382098 | orchestrator | ++ EXTERNAL_API=false 2025-04-01 19:03:38.382112 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-01 19:03:38.382126 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-01 19:03:38.382140 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-01 19:03:38.382154 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-01 19:03:38.382172 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-04-01 19:03:38.382186 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-01 19:03:38.382200 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-01 19:03:38.382229 | orchestrator | + docker version 2025-04-01 19:03:38.661703 | orchestrator | Client: Docker Engine - Community 2025-04-01 19:03:38.665998 | orchestrator | Version: 26.1.4 2025-04-01 19:03:38.666178 | orchestrator | API version: 1.45 2025-04-01 19:03:38.666198 | orchestrator | Go version: go1.21.11 2025-04-01 19:03:38.666213 | orchestrator | Git commit: 5650f9b 2025-04-01 19:03:38.666227 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-01 19:03:38.666243 | orchestrator | OS/Arch: linux/amd64 2025-04-01 19:03:38.666257 | orchestrator | Context: default 2025-04-01 19:03:38.666271 | orchestrator | 2025-04-01 19:03:38.666285 | orchestrator | Server: Docker Engine - Community 2025-04-01 19:03:38.666300 | orchestrator | Engine: 2025-04-01 19:03:38.666314 | orchestrator | Version: 26.1.4 2025-04-01 19:03:38.666328 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-04-01 19:03:38.666342 | orchestrator | Go version: go1.21.11 2025-04-01 19:03:38.666358 | orchestrator | Git commit: de5c9cf 2025-04-01 19:03:38.666402 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-01 19:03:38.666417 | orchestrator | OS/Arch: linux/amd64 2025-04-01 19:03:38.666431 | orchestrator | Experimental: false 2025-04-01 19:03:38.666445 | orchestrator | containerd: 2025-04-01 19:03:38.666459 | orchestrator | Version: 1.7.27 2025-04-01 19:03:38.666473 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-01 19:03:38.666489 | orchestrator | runc: 2025-04-01 19:03:38.666503 | orchestrator | Version: 1.2.5 2025-04-01 19:03:38.666517 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-01 19:03:38.666531 | orchestrator | docker-init: 2025-04-01 19:03:38.666577 | orchestrator | Version: 0.19.0 2025-04-01 19:03:38.666594 | orchestrator | GitCommit: de40ad0 2025-04-01 19:03:38.666626 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-01 19:03:38.678275 | orchestrator | + set -e 2025-04-01 19:03:38.678315 | orchestrator | + source /opt/manager-vars.sh 2025-04-01 19:03:38.678333 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-01 19:03:38.678354 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-01 19:03:38.678368 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-01 19:03:38.678382 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-01 19:03:38.678396 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-01 19:03:38.678411 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-01 19:03:38.678425 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-01 19:03:38.678446 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-01 19:03:38.678460 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-01 19:03:38.678474 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-01 19:03:38.678488 | orchestrator | ++ export ARA=false 2025-04-01 19:03:38.678502 | orchestrator | ++ ARA=false 2025-04-01 19:03:38.678516 | orchestrator | ++ export TEMPEST=false 2025-04-01 19:03:38.678530 | orchestrator | ++ TEMPEST=false 2025-04-01 19:03:38.678544 | orchestrator | ++ export IS_ZUUL=true 2025-04-01 19:03:38.678581 | orchestrator | ++ IS_ZUUL=true 2025-04-01 19:03:38.678601 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:03:38.679021 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 
2025-04-01 19:03:38.679040 | orchestrator | ++ export EXTERNAL_API=false 2025-04-01 19:03:38.679071 | orchestrator | ++ EXTERNAL_API=false 2025-04-01 19:03:38.679086 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-01 19:03:38.679100 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-01 19:03:38.679119 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-01 19:03:38.679134 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-01 19:03:38.679148 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-01 19:03:38.679162 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-01 19:03:38.679177 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-01 19:03:38.679197 | orchestrator | ++ export INTERACTIVE=false 2025-04-01 19:03:38.679215 | orchestrator | ++ INTERACTIVE=false 2025-04-01 19:03:38.679229 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-01 19:03:38.679243 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-01 19:03:38.679262 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-01 19:03:38.685099 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-04-01 19:03:38.685128 | orchestrator | + set -e 2025-04-01 19:03:38.692511 | orchestrator | + VERSION=8.1.0 2025-04-01 19:03:38.692540 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-04-01 19:03:38.692591 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-01 19:03:38.695995 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-01 19:03:38.696027 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-01 19:03:38.699437 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-04-01 19:03:38.705633 | orchestrator | /opt/configuration ~ 2025-04-01 19:03:38.708756 | orchestrator | + set -e 2025-04-01 19:03:38.708782 | orchestrator | + pushd /opt/configuration 2025-04-01 19:03:38.708796 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-01 19:03:38.708815 | orchestrator | + source /opt/venv/bin/activate 2025-04-01 19:03:38.709862 | orchestrator | ++ deactivate nondestructive 2025-04-01 19:03:38.709882 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:38.709896 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:38.709917 | orchestrator | ++ hash -r 2025-04-01 19:03:38.709931 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:38.709945 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-01 19:03:38.709963 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-01 19:03:38.709984 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-01 19:03:38.710051 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-01 19:03:38.710068 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-01 19:03:38.710087 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-01 19:03:38.710167 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-01 19:03:38.710190 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:03:38.710206 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:03:38.710224 | orchestrator | ++ export PATH 2025-04-01 19:03:38.710245 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:38.710259 | orchestrator | ++ '[' -z '' ']' 2025-04-01 19:03:38.710273 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-01 19:03:38.710287 | orchestrator | ++ PS1='(venv) ' 2025-04-01 19:03:38.710301 | orchestrator | ++ export PS1 2025-04-01 19:03:38.710316 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-01 19:03:38.710341 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-01 19:03:38.710445 | orchestrator | ++ hash -r 2025-04-01 19:03:38.710468 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-04-01 19:03:39.994361 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-04-01 19:03:39.995289 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-04-01 19:03:39.997208 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-04-01 19:03:39.998345 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-04-01 19:03:39.999663 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-04-01 19:03:40.013017 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-04-01 19:03:40.015015 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-04-01 19:03:40.016284 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-04-01 19:03:40.018446 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-04-01 19:03:40.063686 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-04-01 19:03:40.065834 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-04-01 19:03:40.067597 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-04-01 19:03:40.069569 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-04-01 19:03:40.074909 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-04-01 19:03:40.300425 | orchestrator | ++ which gilt 2025-04-01 19:03:40.303651 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-04-01 19:03:40.590822 | orchestrator | + /opt/venv/bin/gilt overlay 2025-04-01 19:03:40.590899 | orchestrator | osism.cfg-generics: 2025-04-01 19:03:42.179159 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-04-01 19:03:42.179310 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-04-01 19:03:42.179337 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-04-01 19:03:42.179845 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-04-01 19:03:42.179964 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-04-01 19:03:43.219351 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-04-01 19:03:43.228312 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-04-01 19:03:43.579969 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-04-01 19:03:43.663023 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-01 19:03:43.663131 | orchestrator | + deactivate 2025-04-01 19:03:43.663169 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-01 19:03:43.663187 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:03:43.663201 | orchestrator | + export PATH 2025-04-01 19:03:43.663216 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-01 19:03:43.663230 | orchestrator | + '[' -n '' ']' 2025-04-01 19:03:43.663244 | orchestrator | + hash -r 2025-04-01 19:03:43.663258 | orchestrator | + '[' -n '' ']' 2025-04-01 19:03:43.663273 | orchestrator | + unset VIRTUAL_ENV 2025-04-01 19:03:43.663287 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-01 19:03:43.663301 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-04-01 19:03:43.663315 | orchestrator | + unset -f deactivate 2025-04-01 19:03:43.663338 | orchestrator | ~ 2025-04-01 19:03:43.665271 | orchestrator | + popd 2025-04-01 19:03:43.665301 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-01 19:03:43.666323 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-04-01 19:03:43.666351 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-01 19:03:43.731694 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-01 19:03:43.731816 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-04-01 19:03:43.731839 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-04-01 19:03:43.782359 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-01 19:03:45.247378 | orchestrator | + source /opt/venv/bin/activate 2025-04-01 19:03:45.247496 | orchestrator | ++ deactivate nondestructive 2025-04-01 19:03:45.247515 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:45.247530 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:45.247590 | orchestrator | ++ hash -r 2025-04-01 19:03:45.247606 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:45.247620 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-01 19:03:45.247635 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-01 19:03:45.247649 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-04-01 19:03:45.247664 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-01 19:03:45.247678 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-01 19:03:45.247692 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-01 19:03:45.247706 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-01 19:03:45.247721 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:03:45.247735 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:03:45.247750 | orchestrator | ++ export PATH 2025-04-01 19:03:45.247764 | orchestrator | ++ '[' -n '' ']' 2025-04-01 19:03:45.247778 | orchestrator | ++ '[' -z '' ']' 2025-04-01 19:03:45.247791 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-01 19:03:45.247806 | orchestrator | ++ PS1='(venv) ' 2025-04-01 19:03:45.247820 | orchestrator | ++ export PS1 2025-04-01 19:03:45.247834 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-01 19:03:45.247847 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-01 19:03:45.247864 | orchestrator | ++ hash -r 2025-04-01 19:03:45.247879 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-04-01 19:03:45.247910 | orchestrator | 2025-04-01 19:03:45.945520 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-04-01 19:03:45.945666 | orchestrator | 2025-04-01 19:03:45.945685 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-01 19:03:45.945716 | orchestrator | ok: [testbed-manager] 2025-04-01 19:03:47.036237 | orchestrator | 2025-04-01 19:03:47.036347 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-01 19:03:47.036384 | orchestrator | changed: [testbed-manager] 2025-04-01 19:03:49.652742 | orchestrator | 2025-04-01 19:03:49.652851 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-01 
19:03:49.652871 | orchestrator | 2025-04-01 19:03:49.652886 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 19:03:49.652913 | orchestrator | ok: [testbed-manager] 2025-04-01 19:03:55.749499 | orchestrator | 2025-04-01 19:03:55.749674 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-01 19:03:55.749738 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-01 19:05:22.716954 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-04-01 19:05:22.717127 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-04-01 19:05:22.717148 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-04-01 19:05:22.717164 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-04-01 19:05:22.717180 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-04-01 19:05:22.717195 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-04-01 19:05:22.717209 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-04-01 19:05:22.717223 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-04-01 19:05:22.717247 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-04-01 19:05:22.717263 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-04-01 19:05:22.717278 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-04-01 19:05:22.717292 | orchestrator | 2025-04-01 19:05:22.717307 | orchestrator | TASK [Check status] ************************************************************ 2025-04-01 19:05:22.717344 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-01 19:05:22.768252 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-01 19:05:22.768329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-01 19:05:22.768346 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-01 19:05:22.768363 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j998602236580.1585', 'results_file': '/home/dragon/.ansible_async/j998602236580.1585', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768395 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j187539521673.1610', 'results_file': '/home/dragon/.ansible_async/j187539521673.1610', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768411 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-01 19:05:22.768425 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
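
The "Pull images" task above starts one asynchronous docker pull per image, and the "Check status" task, whose retries appear around this point, polls the async jobs until every pull has finished. A rough shell equivalent of that pattern, shown purely as an illustration (the role itself uses Ansible's async/async_status mechanism, not this script), with the image list taken from the log above:

    #!/usr/bin/env bash
    # Illustrative sketch only: pull the images listed above in parallel
    # and wait for all pulls to finish, similar in spirit to the async
    # "Pull images" / "Check status" pair in the playbook.
    set -euo pipefail

    images=(
      registry.osism.tech/osism/ara-server:1.7.2
      index.docker.io/library/mariadb:11.6.2
      registry.osism.tech/osism/ceph-ansible:8.1.0
      registry.osism.tech/osism/inventory-reconciler:8.1.0
      registry.osism.tech/osism/kolla-ansible:8.1.0
      index.docker.io/library/redis:7.4.1-alpine
      registry.osism.tech/osism/netbox:v4.1.7
      registry.osism.tech/osism/osism-ansible:8.1.0
      registry.osism.tech/osism/osism:0.20241219.2
      index.docker.io/library/postgres:16.6-alpine
      index.docker.io/library/traefik:v3.2.1
      index.docker.io/hashicorp/vault:1.18.2
    )

    pids=()
    for image in "${images[@]}"; do
      docker pull "$image" >/dev/null &   # start each pull in the background
      pids+=("$!")
    done

    # "Check status": wait for every background pull; set -e aborts on failure.
    for pid in "${pids[@]}"; do
      wait "$pid"
    done
    echo "all ${#images[@]} images pulled"
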
2025-04-01 19:05:22.768440 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j530798442649.1635', 'results_file': '/home/dragon/.ansible_async/j530798442649.1635', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768461 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j721772497418.1667', 'results_file': '/home/dragon/.ansible_async/j721772497418.1667', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768481 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-01 19:05:22.768528 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j610501364172.1702', 'results_file': '/home/dragon/.ansible_async/j610501364172.1702', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768543 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j715944077538.1734', 'results_file': '/home/dragon/.ansible_async/j715944077538.1734', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768558 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-01 19:05:22.768599 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j109865391407.1766', 'results_file': '/home/dragon/.ansible_async/j109865391407.1766', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768615 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j784027801659.1798', 'results_file': '/home/dragon/.ansible_async/j784027801659.1798', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768630 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j389120387643.1832', 'results_file': '/home/dragon/.ansible_async/j389120387643.1832', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768644 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j236520710015.1865', 'results_file': '/home/dragon/.ansible_async/j236520710015.1865', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768659 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j22015946711.1905', 'results_file': '/home/dragon/.ansible_async/j22015946711.1905', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-04-01 19:05:22.768673 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j311814971002.1931', 'results_file': '/home/dragon/.ansible_async/j311814971002.1931', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-04-01 
19:05:22.768688 | orchestrator | 2025-04-01 19:05:22.768703 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-04-01 19:05:22.768730 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:23.288733 | orchestrator | 2025-04-01 19:05:23.288861 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-01 19:05:23.288897 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:23.680627 | orchestrator | 2025-04-01 19:05:23.680731 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-01 19:05:23.680765 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:24.026319 | orchestrator | 2025-04-01 19:05:24.026461 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-01 19:05:24.026552 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:24.068774 | orchestrator | 2025-04-01 19:05:24.068831 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-01 19:05:24.068859 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:05:24.453695 | orchestrator | 2025-04-01 19:05:24.453820 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-01 19:05:24.453855 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:24.655287 | orchestrator | 2025-04-01 19:05:24.655328 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-04-01 19:05:24.655352 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:05:26.664153 | orchestrator | 2025-04-01 19:05:26.664270 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-01 19:05:26.664286 | orchestrator | 2025-04-01 19:05:26.664301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 19:05:26.664337 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:26.895612 | orchestrator | 2025-04-01 19:05:26.895710 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-01 19:05:26.895735 | orchestrator | 2025-04-01 19:05:27.014945 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-01 19:05:27.015045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-01 19:05:28.224733 | orchestrator | 2025-04-01 19:05:28.224797 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-01 19:05:28.224826 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-01 19:05:30.281595 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-01 19:05:30.281664 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-01 19:05:30.281681 | orchestrator | 2025-04-01 19:05:30.281697 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-01 19:05:30.281726 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-01 19:05:31.045626 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-01 19:05:31.045744 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-01 19:05:31.045763 | orchestrator | 
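
The traefik role below creates an "external" Docker network that the traefik, netbox and manager compose projects later attach to (the same "Create traefik external network" task reappears in the netbox and manager roles further down). A minimal, idempotent shell sketch of that step, assuming the network is simply named traefik (the name is not visible in the log):

    # Sketch only: create the shared network once; later runs are a no-op.
    # The network name "traefik" is an assumption based on the task names.
    if ! docker network inspect traefik >/dev/null 2>&1; then
      docker network create traefik
    fi
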
2025-04-01 19:05:31.045787 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-01 19:05:31.045833 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:31.759652 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:31.759761 | orchestrator | 2025-04-01 19:05:31.759779 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-04-01 19:05:31.759812 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:31.859815 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:31.859876 | orchestrator | 2025-04-01 19:05:31.859893 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-01 19:05:31.859919 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:05:32.284708 | orchestrator | 2025-04-01 19:05:32.284790 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-01 19:05:32.284819 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:32.393922 | orchestrator | 2025-04-01 19:05:32.393983 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-01 19:05:32.394009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-01 19:05:33.530966 | orchestrator | 2025-04-01 19:05:33.531105 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-04-01 19:05:33.531204 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:34.514679 | orchestrator | 2025-04-01 19:05:34.514746 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-01 19:05:34.514774 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:37.873409 | orchestrator | 2025-04-01 19:05:37.873599 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-01 19:05:37.873635 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:38.196955 | orchestrator | 2025-04-01 19:05:38.197073 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-01 19:05:38.197109 | orchestrator | 2025-04-01 19:05:38.312543 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-04-01 19:05:38.312640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-01 19:05:41.155177 | orchestrator | 2025-04-01 19:05:41.155335 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-01 19:05:41.155375 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:41.304557 | orchestrator | 2025-04-01 19:05:41.304613 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-01 19:05:41.304640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-01 19:05:42.583589 | orchestrator | 2025-04-01 19:05:42.583718 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-01 19:05:42.583757 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-04-01 19:05:42.697917 | orchestrator | 
changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-04-01 19:05:42.697965 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-01 19:05:42.697980 | orchestrator | 2025-04-01 19:05:42.697995 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-01 19:05:42.698064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-01 19:05:43.417231 | orchestrator | 2025-04-01 19:05:43.417334 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-01 19:05:43.417368 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-01 19:05:44.104929 | orchestrator | 2025-04-01 19:05:44.105043 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-04-01 19:05:44.105092 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:44.782902 | orchestrator | 2025-04-01 19:05:44.782995 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-01 19:05:44.783029 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:45.242100 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:45.242220 | orchestrator | 2025-04-01 19:05:45.242240 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-01 19:05:45.242273 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:45.629336 | orchestrator | 2025-04-01 19:05:45.629410 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-01 19:05:45.629440 | orchestrator | ok: [testbed-manager] 2025-04-01 19:05:45.699394 | orchestrator | 2025-04-01 19:05:45.699425 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-01 19:05:45.699447 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:05:46.382932 | orchestrator | 2025-04-01 19:05:46.383049 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-01 19:05:46.383084 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:46.497978 | orchestrator | 2025-04-01 19:05:46.498099 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-01 19:05:46.498141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-01 19:05:47.315059 | orchestrator | 2025-04-01 19:05:47.315152 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-01 19:05:47.315183 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-04-01 19:05:48.056510 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-01 19:05:48.056605 | orchestrator | 2025-04-01 19:05:48.056623 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-01 19:05:48.056654 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-01 19:05:48.778950 | orchestrator | 2025-04-01 19:05:48.779036 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-04-01 19:05:48.779067 | orchestrator | changed: [testbed-manager] 
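
The "Copy secret files" tasks in this play drop generated credentials under /opt/netbox/secrets (and later under /opt/manager/secrets and /opt/ansible/secrets). A hedged sketch of generating one such secret with restrictive permissions; the file name is only an example and not the role's actual layout:

    # Illustration: create the secrets directory and one random secret file
    # readable only by its owner. Real file names are defined by the role.
    install -d -m 0750 /opt/netbox/secrets
    umask 077
    openssl rand -base64 32 > /opt/netbox/secrets/example_secret
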
2025-04-01 19:05:48.842737 | orchestrator | 2025-04-01 19:05:48.842793 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-01 19:05:48.842818 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:05:49.560358 | orchestrator | 2025-04-01 19:05:49.560529 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-01 19:05:49.560574 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:51.784836 | orchestrator | 2025-04-01 19:05:51.784936 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-01 19:05:51.784970 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:58.273871 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:58.274086 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:05:58.274109 | orchestrator | changed: [testbed-manager] 2025-04-01 19:05:58.274127 | orchestrator | 2025-04-01 19:05:58.274142 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-01 19:05:58.274178 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-01 19:05:58.996042 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-04-01 19:05:58.996171 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-01 19:05:58.996190 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-01 19:05:58.996205 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-01 19:05:58.996220 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-01 19:05:58.996235 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-01 19:05:58.996284 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-01 19:05:58.996299 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-01 19:05:58.996316 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-01 19:05:58.996331 | orchestrator | 2025-04-01 19:05:58.996346 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-01 19:05:58.996379 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-01 19:05:59.184656 | orchestrator | 2025-04-01 19:05:59.184785 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-01 19:05:59.184823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-01 19:05:59.980935 | orchestrator | 2025-04-01 19:05:59.981081 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-01 19:05:59.981120 | orchestrator | changed: [testbed-manager] 2025-04-01 19:06:00.682672 | orchestrator | 2025-04-01 19:06:00.682805 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-01 19:06:00.682844 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:01.503404 | orchestrator | 2025-04-01 19:06:01.503609 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-01 19:06:01.503650 | orchestrator | changed: [testbed-manager] 2025-04-01 19:06:07.359608 | orchestrator | 2025-04-01 
19:06:07.359733 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-01 19:06:07.359770 | orchestrator | changed: [testbed-manager] 2025-04-01 19:06:08.438315 | orchestrator | 2025-04-01 19:06:08.438440 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-01 19:06:08.438520 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:30.927393 | orchestrator | 2025-04-01 19:06:30.927520 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-01 19:06:30.927550 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-04-01 19:06:31.060365 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:31.060490 | orchestrator | 2025-04-01 19:06:31.060519 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-01 19:06:31.060546 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:31.125989 | orchestrator | 2025-04-01 19:06:31.126117 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-01 19:06:31.126134 | orchestrator | 2025-04-01 19:06:31.126145 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-01 19:06:31.126169 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:31.238870 | orchestrator | 2025-04-01 19:06:31.238940 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-01 19:06:31.238968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-04-01 19:06:32.204951 | orchestrator | 2025-04-01 19:06:32.205020 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-01 19:06:32.205048 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:32.310693 | orchestrator | 2025-04-01 19:06:32.310726 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-01 19:06:32.310748 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:32.376236 | orchestrator | 2025-04-01 19:06:32.376265 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-01 19:06:32.376286 | orchestrator | ok: [testbed-manager] => { 2025-04-01 19:06:33.145262 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-01 19:06:33.145382 | orchestrator | } 2025-04-01 19:06:33.145401 | orchestrator | 2025-04-01 19:06:33.145417 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-01 19:06:33.145483 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:34.223976 | orchestrator | 2025-04-01 19:06:34.224050 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-01 19:06:34.224079 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:34.321067 | orchestrator | 2025-04-01 19:06:34.321113 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-01 19:06:34.321137 | orchestrator | ok: [testbed-manager] 2025-04-01 19:06:34.390007 | orchestrator | 2025-04-01 19:06:34.390087 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-04-01 
19:06:34.390122 | orchestrator | ok: [testbed-manager] => { 2025-04-01 19:06:34.466114 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-04-01 19:06:34.466153 | orchestrator | } 2025-04-01 19:06:34.466168 | orchestrator | 2025-04-01 19:06:34.466183 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-01 19:06:34.466204 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:34.535520 | orchestrator | 2025-04-01 19:06:34.535563 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-01 19:06:34.535586 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:34.615742 | orchestrator | 2025-04-01 19:06:34.615803 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-01 19:06:34.615828 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:34.703880 | orchestrator | 2025-04-01 19:06:34.703931 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-01 19:06:34.703955 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:34.783669 | orchestrator | 2025-04-01 19:06:34.783762 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-01 19:06:34.783795 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:34.860903 | orchestrator | 2025-04-01 19:06:34.860990 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-01 19:06:34.861028 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:06:36.165093 | orchestrator | 2025-04-01 19:06:36.165191 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-01 19:06:36.165226 | orchestrator | changed: [testbed-manager] 2025-04-01 19:06:36.296947 | orchestrator | 2025-04-01 19:06:36.297027 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-01 19:06:36.297057 | orchestrator | ok: [testbed-manager] 2025-04-01 19:07:36.371398 | orchestrator | 2025-04-01 19:07:36.371582 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-01 19:07:36.371619 | orchestrator | Pausing for 60 seconds 2025-04-01 19:07:36.493901 | orchestrator | changed: [testbed-manager] 2025-04-01 19:07:36.494003 | orchestrator | 2025-04-01 19:07:36.494079 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-01 19:07:36.494113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-01 19:12:21.248165 | orchestrator | 2025-04-01 19:12:21.248295 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-01 19:12:21.248370 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-01 19:12:23.482219 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-01 19:12:23.482386 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-04-01 19:12:23.482408 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-04-01 19:12:23.482424 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-01 19:12:23.482439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-01 19:12:23.482454 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-04-01 19:12:23.482468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-04-01 19:12:23.482482 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-01 19:12:23.482496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-01 19:12:23.482538 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-01 19:12:23.482554 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-01 19:12:23.482568 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-01 19:12:23.482582 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-01 19:12:23.482596 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-01 19:12:23.482610 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-01 19:12:23.482624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-01 19:12:23.482638 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-01 19:12:23.482653 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-01 19:12:23.482677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-04-01 19:12:23.482692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-04-01 19:12:23.482706 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-04-01 19:12:23.482721 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-04-01 19:12:23.482735 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-04-01 19:12:23.482749 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-04-01 19:12:23.482763 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-04-01 19:12:23.482777 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
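
The "Check that all containers are in a good state" handler retries until nothing is left in a bad state; netbox and its database take a few minutes to come up, hence the long retry run above (the check succeeds on the line that follows). From the shell, the same condition can be approximated by listing containers whose healthcheck is still starting or unhealthy, a sketch under the assumption that plain docker CLI filters are sufficient here:

    # Sketch: poll until no container reports a starting or unhealthy
    # healthcheck, giving up after roughly five minutes (60 x 5 seconds).
    for attempt in $(seq 1 60); do
      not_ready="$(docker ps --filter health=starting --filter health=unhealthy --format '{{.Names}}')"
      if [[ -z "$not_ready" ]]; then
        echo "all containers report a healthy state"
        break
      fi
      echo "attempt $attempt: still waiting for: $not_ready"
      sleep 5
    done
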
2025-04-01 19:12:23.482794 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:23.482810 | orchestrator | 2025-04-01 19:12:23.482827 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-01 19:12:23.482842 | orchestrator | 2025-04-01 19:12:23.482858 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 19:12:23.482888 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:23.620624 | orchestrator | 2025-04-01 19:12:23.620705 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-01 19:12:23.620736 | orchestrator | 2025-04-01 19:12:23.707833 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-01 19:12:23.707892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-01 19:12:25.732474 | orchestrator | 2025-04-01 19:12:25.732583 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-01 19:12:25.732617 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:25.789904 | orchestrator | 2025-04-01 19:12:25.789961 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-01 19:12:25.789988 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:25.891674 | orchestrator | 2025-04-01 19:12:25.891730 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-01 19:12:25.891755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-01 19:12:28.926920 | orchestrator | 2025-04-01 19:12:28.927040 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-01 19:12:28.927078 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-01 19:12:29.736696 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-01 19:12:29.736817 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-04-01 19:12:29.736835 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-01 19:12:29.736849 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-01 19:12:29.736864 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-01 19:12:29.736879 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-01 19:12:29.736893 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-01 19:12:29.736907 | orchestrator | 2025-04-01 19:12:29.736922 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-01 19:12:29.736952 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:29.830693 | orchestrator | 2025-04-01 19:12:29.830735 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-01 19:12:29.830760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-01 19:12:31.160080 | orchestrator | 2025-04-01 19:12:31.160186 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-04-01 19:12:31.160218 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-04-01 19:12:31.858902 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-01 19:12:31.859027 | orchestrator | 2025-04-01 19:12:31.859049 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-01 19:12:31.859083 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:31.922544 | orchestrator | 2025-04-01 19:12:31.922585 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-01 19:12:31.922609 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:12:32.001935 | orchestrator | 2025-04-01 19:12:32.001970 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-01 19:12:32.001994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-01 19:12:33.500140 | orchestrator | 2025-04-01 19:12:33.500198 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-01 19:12:33.500225 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:12:34.198567 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:12:34.198666 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:34.198683 | orchestrator | 2025-04-01 19:12:34.198698 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-01 19:12:34.198728 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:34.291403 | orchestrator | 2025-04-01 19:12:34.291455 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-01 19:12:34.291481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-01 19:12:35.016383 | orchestrator | 2025-04-01 19:12:35.016482 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-01 19:12:35.016512 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:12:35.707123 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:35.707216 | orchestrator | 2025-04-01 19:12:35.707232 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-04-01 19:12:35.707262 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:35.842652 | orchestrator | 2025-04-01 19:12:35.842699 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-01 19:12:35.842723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-01 19:12:36.399692 | orchestrator | 2025-04-01 19:12:36.399784 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-01 19:12:36.399830 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:36.809718 | orchestrator | 2025-04-01 19:12:36.809809 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-01 19:12:36.809833 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:38.116712 | orchestrator | 2025-04-01 19:12:38.116829 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-01 19:12:38.116900 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-01 19:12:38.941369 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-01 19:12:38.941452 | orchestrator | 2025-04-01 19:12:38.941461 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-01 19:12:38.941479 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:39.311431 | orchestrator | 2025-04-01 19:12:39.311511 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-01 19:12:39.311537 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:39.371696 | orchestrator | 2025-04-01 19:12:39.371725 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-01 19:12:39.371743 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:12:40.068225 | orchestrator | 2025-04-01 19:12:40.068396 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-01 19:12:40.068435 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:40.211659 | orchestrator | 2025-04-01 19:12:40.211697 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-01 19:12:40.211721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-01 19:12:40.269291 | orchestrator | 2025-04-01 19:12:40.269403 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-01 19:12:40.269432 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:42.493618 | orchestrator | 2025-04-01 19:12:42.493695 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-01 19:12:42.493723 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-01 19:12:43.321876 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-01 19:12:43.321965 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-01 19:12:43.321979 | orchestrator | 2025-04-01 19:12:43.321993 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-01 19:12:43.322062 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:44.142666 | orchestrator | 2025-04-01 19:12:44.142749 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-01 19:12:44.142776 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:44.242605 | orchestrator | 2025-04-01 19:12:44.242639 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-01 19:12:44.242660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-01 19:12:44.291060 | orchestrator | 2025-04-01 19:12:44.291093 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-01 19:12:44.291112 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:45.052780 | orchestrator | 2025-04-01 19:12:45.052869 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-01 19:12:45.052901 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-01 19:12:45.140793 | orchestrator | 2025-04-01 19:12:45.140848 | 
orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-01 19:12:45.140874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-01 19:12:45.943480 | orchestrator | 2025-04-01 19:12:45.943531 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-01 19:12:45.943556 | orchestrator | changed: [testbed-manager] 2025-04-01 19:12:46.615598 | orchestrator | 2025-04-01 19:12:46.615647 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-01 19:12:46.615669 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:46.678978 | orchestrator | 2025-04-01 19:12:46.679003 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-01 19:12:46.679020 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:12:46.743814 | orchestrator | 2025-04-01 19:12:46.743838 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-01 19:12:46.743854 | orchestrator | ok: [testbed-manager] 2025-04-01 19:12:47.645890 | orchestrator | 2025-04-01 19:12:47.646006 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-01 19:12:47.646125 | orchestrator | changed: [testbed-manager] 2025-04-01 19:13:30.282730 | orchestrator | 2025-04-01 19:13:30.282868 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-01 19:13:30.282908 | orchestrator | changed: [testbed-manager] 2025-04-01 19:13:30.988692 | orchestrator | 2025-04-01 19:13:30.988808 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-01 19:13:30.988845 | orchestrator | ok: [testbed-manager] 2025-04-01 19:13:33.772425 | orchestrator | 2025-04-01 19:13:33.772535 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-01 19:13:33.772567 | orchestrator | changed: [testbed-manager] 2025-04-01 19:13:33.844046 | orchestrator | 2025-04-01 19:13:33.844075 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-01 19:13:33.844095 | orchestrator | ok: [testbed-manager] 2025-04-01 19:13:33.923086 | orchestrator | 2025-04-01 19:13:33.923126 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-01 19:13:33.923140 | orchestrator | 2025-04-01 19:13:33.923153 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-01 19:13:33.923173 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:14:33.992362 | orchestrator | 2025-04-01 19:14:33.992501 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-01 19:14:33.992541 | orchestrator | Pausing for 60 seconds 2025-04-01 19:14:40.041860 | orchestrator | changed: [testbed-manager] 2025-04-01 19:14:40.042075 | orchestrator | 2025-04-01 19:14:40.042098 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-04-01 19:14:40.042135 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:21.885706 | orchestrator | 2025-04-01 19:15:21.885849 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 
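
A few tasks above, the manager role raises fs.inotify.max_user_watches and fs.inotify.max_user_instances so the containerized services do not exhaust inotify limits. The values used are not shown in the log; the sketch below applies placeholder values persistently via a sysctl.d drop-in, the usual mechanism on Debian/Ubuntu hosts (file name and values are illustrative only):

    # Sketch with placeholder values; the actual values are set by the
    # osism.services.manager role and are not visible in this log.
    cat > /etc/sysctl.d/99-inotify-example.conf <<'EOF'
    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 512
    EOF
    sysctl --system   # reload all sysctl.d drop-ins
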
2025-04-01 19:15:21.885876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-01 19:15:28.649013 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-01 19:15:28.649180 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:28.649199 | orchestrator | 2025-04-01 19:15:28.649215 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-01 19:15:28.649266 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:28.759553 | orchestrator | 2025-04-01 19:15:28.759604 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-01 19:15:28.759631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-01 19:15:28.828595 | orchestrator | 2025-04-01 19:15:28.828625 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-01 19:15:28.828640 | orchestrator | 2025-04-01 19:15:28.828654 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-01 19:15:28.828675 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:15:29.022764 | orchestrator | 2025-04-01 19:15:29.022808 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:15:29.022827 | orchestrator | testbed-manager : ok=105 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-01 19:15:29.022843 | orchestrator | 2025-04-01 19:15:29.022867 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-01 19:15:29.032612 | orchestrator | + deactivate 2025-04-01 19:15:29.032639 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-01 19:15:29.032656 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-01 19:15:29.032670 | orchestrator | + export PATH 2025-04-01 19:15:29.032685 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-01 19:15:29.032699 | orchestrator | + '[' -n '' ']' 2025-04-01 19:15:29.032714 | orchestrator | + hash -r 2025-04-01 19:15:29.032727 | orchestrator | + '[' -n '' ']' 2025-04-01 19:15:29.032742 | orchestrator | + unset VIRTUAL_ENV 2025-04-01 19:15:29.032756 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-01 19:15:29.032770 | orchestrator | + '[' '!' 
'' = nondestructive ']'
2025-04-01 19:15:29.032784 | orchestrator | + unset -f deactivate
2025-04-01 19:15:29.032839 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-04-01 19:15:29.032860 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-04-01 19:15:29.034124 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-04-01 19:15:29.034151 | orchestrator | + local max_attempts=60
2025-04-01 19:15:29.034165 | orchestrator | + local name=ceph-ansible
2025-04-01 19:15:29.034180 | orchestrator | + local attempt_num=1
2025-04-01 19:15:29.034216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-04-01 19:15:29.074481 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-01 19:15:29.075476 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-04-01 19:15:29.075502 | orchestrator | + local max_attempts=60
2025-04-01 19:15:29.075517 | orchestrator | + local name=kolla-ansible
2025-04-01 19:15:29.075531 | orchestrator | + local attempt_num=1
2025-04-01 19:15:29.075550 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-04-01 19:15:29.110599 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-01 19:15:29.111466 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-04-01 19:15:29.111493 | orchestrator | + local max_attempts=60
2025-04-01 19:15:29.111509 | orchestrator | + local name=osism-ansible
2025-04-01 19:15:29.111525 | orchestrator | + local attempt_num=1
2025-04-01 19:15:29.111550 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-04-01 19:15:29.144494 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-04-01 19:15:29.897845 | orchestrator | + [[ true == \t\r\u\e ]]
2025-04-01 19:15:29.897970 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-04-01 19:15:29.898007 | orchestrator | ++ semver 8.1.0 9.0.0
2025-04-01 19:15:29.961123 | orchestrator | + [[ -1 -ge 0 ]]
2025-04-01 19:15:30.231621 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-04-01 19:15:30.231733 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-04-01 19:15:30.231769 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-01 19:15:30.238665 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238693 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238707 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-04-01 19:15:30.238746 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-04-01 19:15:30.238761 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238780 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238794 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238809 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 50 seconds (healthy)
2025-04-01 19:15:30.238823 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238837 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-04-01 19:15:30.238882 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238897 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238911 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-04-01 19:15:30.238925 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238939 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238953 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238967 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy)
2025-04-01 19:15:30.238987 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-04-01 19:15:30.398479 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-04-01 19:15:30.407450 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy)
2025-04-01 19:15:30.407488 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy)
2025-04-01 19:15:30.407504 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp
2025-04-01 19:15:30.407519 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp
2025-04-01 19:15:30.407541 | orchestrator | ++ semver 8.1.0 7.0.0
2025-04-01 19:15:30.464806 | orchestrator | + [[ 1 -ge 0 ]]
2025-04-01 19:15:30.472449 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-04-01 19:15:30.472486 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-04-01 19:15:32.365345 | orchestrator | 2025-04-01 19:15:32 | INFO  | Task 64aa57e9-4243-4997-a5fc-9048842454f2 (resolvconf) was prepared for execution.
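
The shell trace above calls wait_for_container_healthy with a maximum attempt count and a container name and returns as soon as docker inspect reports healthy. The full function body is not visible in the log; the sketch below reconstructs the visible parts and assumes a simple sleep-and-retry loop for the branch that never runs here, because all three containers are already healthy on the first check:

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        # Loop until the container's healthcheck reports "healthy" or the
        # attempt budget is used up. The retry/sleep details are assumed.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5
        done
    }
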
2025-04-01 19:15:36.160153 | orchestrator | 2025-04-01 19:15:32 | INFO  | It takes a moment until task 64aa57e9-4243-4997-a5fc-9048842454f2 (resolvconf) has been started and output is visible here. 2025-04-01 19:15:36.160380 | orchestrator | 2025-04-01 19:15:36.160522 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-01 19:15:36.161202 | orchestrator | 2025-04-01 19:15:36.163278 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 19:15:36.165709 | orchestrator | Tuesday 01 April 2025 19:15:36 +0000 (0:00:00.123) 0:00:00.123 ********* 2025-04-01 19:15:40.859687 | orchestrator | ok: [testbed-manager] 2025-04-01 19:15:40.860336 | orchestrator | 2025-04-01 19:15:40.860384 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-01 19:15:40.861000 | orchestrator | Tuesday 01 April 2025 19:15:40 +0000 (0:00:04.699) 0:00:04.823 ********* 2025-04-01 19:15:40.937220 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:15:40.937853 | orchestrator | 2025-04-01 19:15:40.938969 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-01 19:15:41.043882 | orchestrator | Tuesday 01 April 2025 19:15:40 +0000 (0:00:00.079) 0:00:04.902 ********* 2025-04-01 19:15:41.044035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-01 19:15:41.044802 | orchestrator | 2025-04-01 19:15:41.044845 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-01 19:15:41.158925 | orchestrator | Tuesday 01 April 2025 19:15:41 +0000 (0:00:00.107) 0:00:05.010 ********* 2025-04-01 19:15:41.159031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-01 19:15:41.159732 | orchestrator | 2025-04-01 19:15:41.161012 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-01 19:15:41.161358 | orchestrator | Tuesday 01 April 2025 19:15:41 +0000 (0:00:00.113) 0:00:05.123 ********* 2025-04-01 19:15:42.483935 | orchestrator | ok: [testbed-manager] 2025-04-01 19:15:42.484504 | orchestrator | 2025-04-01 19:15:42.484542 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-01 19:15:42.484836 | orchestrator | Tuesday 01 April 2025 19:15:42 +0000 (0:00:01.308) 0:00:06.432 ********* 2025-04-01 19:15:42.539173 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:15:42.539674 | orchestrator | 2025-04-01 19:15:42.540156 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-01 19:15:42.541004 | orchestrator | Tuesday 01 April 2025 19:15:42 +0000 (0:00:00.072) 0:00:06.504 ********* 2025-04-01 19:15:43.090724 | orchestrator | ok: [testbed-manager] 2025-04-01 19:15:43.163364 | orchestrator | 2025-04-01 19:15:43.163403 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-01 19:15:43.163420 | orchestrator | Tuesday 01 April 2025 19:15:43 +0000 (0:00:00.544) 0:00:07.049 ********* 2025-04-01 19:15:43.163442 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:15:43.163881 | orchestrator | 2025-04-01 19:15:43.164371 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-01 19:15:43.165017 | orchestrator | Tuesday 01 April 2025 19:15:43 +0000 (0:00:00.079) 0:00:07.129 ********* 2025-04-01 19:15:43.815063 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:45.192492 | orchestrator | 2025-04-01 19:15:45.192619 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-01 19:15:45.192637 | orchestrator | Tuesday 01 April 2025 19:15:43 +0000 (0:00:00.650) 0:00:07.779 ********* 2025-04-01 19:15:45.192669 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:45.193177 | orchestrator | 2025-04-01 19:15:45.193347 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-01 19:15:45.195574 | orchestrator | Tuesday 01 April 2025 19:15:45 +0000 (0:00:01.374) 0:00:09.154 ********* 2025-04-01 19:15:46.299845 | orchestrator | ok: [testbed-manager] 2025-04-01 19:15:46.393830 | orchestrator | 2025-04-01 19:15:46.393894 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-01 19:15:46.393910 | orchestrator | Tuesday 01 April 2025 19:15:46 +0000 (0:00:01.107) 0:00:10.262 ********* 2025-04-01 19:15:46.393937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-01 19:15:46.396146 | orchestrator | 2025-04-01 19:15:46.397847 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-01 19:15:46.397876 | orchestrator | Tuesday 01 April 2025 19:15:46 +0000 (0:00:00.097) 0:00:10.360 ********* 2025-04-01 19:15:47.750167 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:47.750374 | orchestrator | 2025-04-01 19:15:47.753391 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:15:47.754279 | orchestrator | 2025-04-01 19:15:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:15:47.754386 | orchestrator | 2025-04-01 19:15:47 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:15:47.755772 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:15:47.756743 | orchestrator | 2025-04-01 19:15:47.757704 | orchestrator | Tuesday 01 April 2025 19:15:47 +0000 (0:00:01.354) 0:00:11.715 ********* 2025-04-01 19:15:47.758667 | orchestrator | =============================================================================== 2025-04-01 19:15:47.759408 | orchestrator | Gathering Facts --------------------------------------------------------- 4.70s 2025-04-01 19:15:47.760406 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.37s 2025-04-01 19:15:47.761502 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.36s 2025-04-01 19:15:47.762341 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.31s 2025-04-01 19:15:47.763105 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.11s 2025-04-01 19:15:47.763867 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.65s 2025-04-01 19:15:47.764128 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2025-04-01 19:15:47.765231 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.11s 2025-04-01 19:15:47.765631 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2025-04-01 19:15:47.766280 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2025-04-01 19:15:47.766645 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-04-01 19:15:47.766845 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2025-04-01 19:15:47.767108 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-04-01 19:15:48.334204 | orchestrator | + osism apply sshconfig 2025-04-01 19:15:50.028416 | orchestrator | 2025-04-01 19:15:50 | INFO  | Task f4f42890-db02-43ee-8385-c3e6cc9a1510 (sshconfig) was prepared for execution. 2025-04-01 19:15:53.622440 | orchestrator | 2025-04-01 19:15:50 | INFO  | It takes a moment until task f4f42890-db02-43ee-8385-c3e6cc9a1510 (sshconfig) has been started and output is visible here. 
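The resolvconf play above switches testbed-manager to systemd-resolved: conflicting packages are removed, /etc/resolv.conf is linked to the systemd-resolved stub, a configuration file is copied, and the service is restarted. A rough manual equivalent of those reported changes, assuming default paths and a placeholder nameserver (the role's actual template and DNS values are not visible in this log):

    # Sketch only: mirrors the link/copy/restart steps reported above.
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    mkdir -p /etc/systemd/resolved.conf.d
    # Placeholder drop-in; the real nameservers come from the role's variables.
    printf '[Resolve]\nDNS=203.0.113.53\n' > /etc/systemd/resolved.conf.d/osism.conf
    systemctl restart systemd-resolved
    resolvectl status   # quick check that systemd-resolved now answers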
2025-04-01 19:15:53.622626 | orchestrator | 2025-04-01 19:15:53.623240 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-04-01 19:15:53.623271 | orchestrator | 2025-04-01 19:15:53.623611 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-04-01 19:15:53.624826 | orchestrator | Tuesday 01 April 2025 19:15:53 +0000 (0:00:00.125) 0:00:00.125 ********* 2025-04-01 19:15:54.229175 | orchestrator | ok: [testbed-manager] 2025-04-01 19:15:54.229606 | orchestrator | 2025-04-01 19:15:54.230140 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-04-01 19:15:54.230870 | orchestrator | Tuesday 01 April 2025 19:15:54 +0000 (0:00:00.607) 0:00:00.733 ********* 2025-04-01 19:15:54.735584 | orchestrator | changed: [testbed-manager] 2025-04-01 19:15:54.736736 | orchestrator | 2025-04-01 19:15:54.737866 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-04-01 19:15:54.738143 | orchestrator | Tuesday 01 April 2025 19:15:54 +0000 (0:00:00.506) 0:00:01.240 ********* 2025-04-01 19:16:01.147148 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-04-01 19:16:01.147413 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-04-01 19:16:01.147441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-04-01 19:16:01.149520 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-04-01 19:16:01.149713 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-01 19:16:01.150738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-04-01 19:16:01.151242 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-04-01 19:16:01.152496 | orchestrator | 2025-04-01 19:16:01.153138 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-04-01 19:16:01.153162 | orchestrator | Tuesday 01 April 2025 19:16:01 +0000 (0:00:06.410) 0:00:07.651 ********* 2025-04-01 19:16:01.221179 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:16:01.222681 | orchestrator | 2025-04-01 19:16:01.222729 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-04-01 19:16:01.849659 | orchestrator | Tuesday 01 April 2025 19:16:01 +0000 (0:00:00.074) 0:00:07.725 ********* 2025-04-01 19:16:01.849768 | orchestrator | changed: [testbed-manager] 2025-04-01 19:16:01.850451 | orchestrator | 2025-04-01 19:16:01.851258 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:16:01.853900 | orchestrator | 2025-04-01 19:16:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:16:01.855364 | orchestrator | 2025-04-01 19:16:01 | INFO  | Please wait and do not abort execution. 
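The sshconfig play above renders one fragment per inventory host under the operator's ~/.ssh/config.d and then assembles them into a single ~/.ssh/config; the known-hosts play that follows complements it by pre-seeding host keys. A sketch of both steps for one node, using the testbed-node-0 address that appears in the scan results further down and an illustrative user name (the real fragments are templated from the inventory):

    # Sketch only: shape of one generated fragment plus the assemble step.
    mkdir -p ~/.ssh/config.d
    cat > ~/.ssh/config.d/testbed-node-0 <<'EOF'
    Host testbed-node-0
        HostName 192.168.16.10
        User operator   # illustrative; the play uses the operator user from the inventory
    EOF
    cat ~/.ssh/config.d/* > ~/.ssh/config

    # Roughly what the known-hosts play does per host before writing the entries.
    ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0 192.168.16.10 >> ~/.ssh/known_hosts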
2025-04-01 19:16:01.855397 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:16:01.856203 | orchestrator | 2025-04-01 19:16:01.857490 | orchestrator | Tuesday 01 April 2025 19:16:01 +0000 (0:00:00.630) 0:00:08.355 ********* 2025-04-01 19:16:01.858552 | orchestrator | =============================================================================== 2025-04-01 19:16:01.858583 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.41s 2025-04-01 19:16:01.859096 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-04-01 19:16:01.860135 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s 2025-04-01 19:16:01.860792 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s 2025-04-01 19:16:01.861354 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-04-01 19:16:02.474393 | orchestrator | + osism apply known-hosts 2025-04-01 19:16:04.120222 | orchestrator | 2025-04-01 19:16:04 | INFO  | Task 493a8817-c43e-4049-bbbf-18ddeb49a7f2 (known-hosts) was prepared for execution. 2025-04-01 19:16:07.593410 | orchestrator | 2025-04-01 19:16:04 | INFO  | It takes a moment until task 493a8817-c43e-4049-bbbf-18ddeb49a7f2 (known-hosts) has been started and output is visible here. 2025-04-01 19:16:07.593498 | orchestrator | 2025-04-01 19:16:07.595504 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-04-01 19:16:07.595528 | orchestrator | 2025-04-01 19:16:07.596777 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-04-01 19:16:07.597089 | orchestrator | Tuesday 01 April 2025 19:16:07 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-04-01 19:16:13.623853 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-01 19:16:13.624083 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-01 19:16:13.624117 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-01 19:16:13.624210 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-01 19:16:13.626208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-01 19:16:13.629980 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-01 19:16:13.630984 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-01 19:16:13.632466 | orchestrator | 2025-04-01 19:16:13.633484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-04-01 19:16:13.633755 | orchestrator | Tuesday 01 April 2025 19:16:13 +0000 (0:00:06.033) 0:00:06.163 ********* 2025-04-01 19:16:13.832983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-01 19:16:13.836203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-01 19:16:13.836436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-01 
19:16:13.837523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-01 19:16:13.839581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-01 19:16:13.840706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-01 19:16:13.841727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-01 19:16:13.842717 | orchestrator | 2025-04-01 19:16:13.843499 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:13.844118 | orchestrator | Tuesday 01 April 2025 19:16:13 +0000 (0:00:00.209) 0:00:06.373 ********* 2025-04-01 19:16:15.168477 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC89c9r4SshYTTPaGdZGfjo8ywoG12V3m821LaocDIpNjXDPsm0NDsHP2Bb6zrLGCEtxFBV2O48MzgeDQrOhgCkHA6+nR9rnFMxlxQmPwOD4DzMxUyAlGGbKUVZDdrolKbrPAjtKVxcQ6pPxChzN4miOTpXoTaNJ/WwL+AEaatZUw1AdYpoBR6PRvu8AJjD+jCztLubdPZAxVNmKO1Y7D/oQ6SZCPtTyrSH2OJQqNFjkij4GfRUwHgqbCSXmi0OGU+ZqDCyFLemEZxz/jgYbO24eL8SgN6btY0QLacrB4O6qutUCFpDk5QnC537qRG0JEOOvFnFpp+Q7elsZieAtcLqIqcqIXhmXSJLtlIK+Nhu/T18rPCtrcTe7ee9o+U2ru4Bu5hHT/SOb//aR/DoKFFgfviQzPBsAUhP28rGmHd9e3BMmsMITgV273FxWCdP7KZd2gkWk+iBa49xNuK54vL1KMc0qAGTGfS4CCgMpNL0tVGlOXckn9QxDw9Tj4Rim9E=) 2025-04-01 19:16:15.169016 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTfeLzrA/ykXL7MXlQcUPb1Eu1xOBlTyoub1G4APnrhB1Jsqr/yfOw/BZVcSdGtpntQ/Aj9MnniHdZVy/MdRYk=) 2025-04-01 19:16:15.169047 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL9fUvL6Upj651D19e6D8UtsKTEHjgp7ocFt6JS+mywJ) 2025-04-01 19:16:15.169068 | orchestrator | 2025-04-01 19:16:15.169651 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:15.170580 | orchestrator | Tuesday 01 April 2025 19:16:15 +0000 (0:00:01.331) 0:00:07.705 ********* 2025-04-01 19:16:16.420019 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVWLFKtRe7SwJF2mM73iK2iqepKeYTAHBOhu/ExVVRAp9UHxVUVMZyVBxI/kqqSeI/4auS6rVDB2RVVpgYXnZ0wWwsCGtqKqbxsaC+vzMgPlc62koS1JHxpy97Cg7HDzhdNs29TFnqYzgbdmnLD2sIQjB5myKdk/gDGbB5uy5/HDTFurLIAM0om9mG2UVbgeGO7Wixv2pkxnH9QWBbNWuO1mGVsAfOXV7fCkHBpW5kHNycMBuejH0khIzXWOIuJ4q3f6jLsRjyC6yn1jnxFu0eA/9KqoUgWEnaaUgXtN3U/T1B1SwYtdD2y7iYdOTYFo4xXjd2+Otp2XIg4QCnY3rSh0IADUS+h9Oc86IAwzhmDEG7wyAl1BoExd8e8IUpxEOEBdbb0hxnEzTK4kh2Do4K1+sc2TS9Lo3UASYWKQS7iSKUtorp7CD0lbfU5krELMu2OLsx10b3kDxRzExeDWqmBl8dGnXYFd9EqtluyWLwV3P/QozPWcy11FwMikYf5u0=) 2025-04-01 19:16:16.420340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwHHFGvxudf4C35rlqj43dIfLm3Urq5ordnVmkMEYAUwPy7G0cvvKH+38NeeGo1qbvG3CYhJTNZihCT1hzJsGA=) 2025-04-01 19:16:16.420573 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKo9dzbE9gS7rcsCqV32gUmkVCjdcjMYy6DM2Xgwn3Hb) 2025-04-01 19:16:16.420605 | orchestrator | 2025-04-01 19:16:16.421168 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:16.421393 | orchestrator | Tuesday 01 April 2025 19:16:16 +0000 (0:00:01.254) 0:00:08.959 ********* 2025-04-01 19:16:17.661963 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxwP7LhNv4g3HUgj0yDyGNrY7Vqgi9IFMHXEjVy8KFfvbB5kNMQjQiGv0QfuBv6YDZdpHkdEXHY/gqt8kYmd/eDRMf+oxuIVi9kPhNZW41guVVug+Hx7VaT/BzFCmNWpcvC4vhMtfhX1Mth3L57atLfh0UTlVIlIA+gnuJTlVVTosiAp0OEB8uRgi+dBeWkclFPs2Sf6YnsHOqm6dugUNlQQv86Y75LrY6goSbHdhRMkzcMUHB84u4rrKIDYXcDvH293ZKIoGErgeP9IkrU4xHvAQSOi3gLINtcAgd6uVMaK9WsNzj948Zg2PhjmLelWHgX5P97r9MqQjK5ORj6RmFwWAexlCdNLbHyVE6iY20QcUW4Pw2TzvX9lIAWPf9SUf/i+yivshsd5B4YyAC4mSHq/vjOhJfvTm4chl/gxp/0WsJyzoZOJuhXQl6Qc+GzAxk7EGyyzBEsEqbnLC1bgjlke7chjgeck+wpEgJ5njQYN3hn83Hk3+x1tJNd3MMa0s=) 2025-04-01 19:16:17.662230 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDa361O1dkUR8FIyNLggXH7cG7NFKuwguq/ndTpU8jxqg7mVn4jnFhScpU6JqFXHAUV4lMIMiDJRpfh81okmpiI=) 2025-04-01 19:16:17.663673 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6iVSAZLsQAs6Fx+EwpaEqKzGfmkg/NjmFmyrqxeDIr) 2025-04-01 19:16:17.664325 | orchestrator | 2025-04-01 19:16:17.664934 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:17.665373 | orchestrator | Tuesday 01 April 2025 19:16:17 +0000 (0:00:01.241) 0:00:10.201 ********* 2025-04-01 19:16:18.856155 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfn0CUNOGe6/pcBOBzykP2ckk+2fatZtDBFDoSYVcr5y6vSeAQ7CpAeAClNCOBeVbERZxgh+wiTIBgWbpEuAkVmm9AZMCI/50YqCqVaNb6vNR/y4YgHAXuEByhaqlmQWDXXEmW+x3CcWnI7sBE8eH0Fpr/LwwS9rNhbFW3SfUteG2X6xYoQoXjM8nFiJFmY1bPj16Ux/n6o19lf6o4WqPIbBwFTH6w4ftBeR9BL54sJ4Y5qzqq0XMBFil4U53Ym0vutnHrrbs/T4i2s0FgeQvVObvgOwQzZ21To8kuv/AGgNAzKuYs4gTAN0721tLu9hgTxhvZoon+/QcZzooZflaDdy+HdYoaXjw+vrORbYnl4+tm3VLlxNYDyQtlq4/W6FApQZagfyaD3BWGGZzrVf+C2SWhxssb3mLt9fsDsLPZBcs1lq+5iVn59Q+ZSgUXAcYhn6UnTlKs/kQs3AmRPJ4aM+BMpoLK52rIgU5fK01SYH0cB438VDA1b67gs4vAdYM=) 2025-04-01 19:16:18.858631 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDOloAhQ2qz1GK59+9uM7W/9FOVKm3accacc3mdDcEplYM5TiEztCC8HE7EMp3IN+hD1ALUU2v6ZJYY5GXq/1yo=) 2025-04-01 19:16:18.860978 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMWQ+d5J3Ed0cj6KXKWasRyA1KOTY4mKw9IP9AIc8z77) 2025-04-01 19:16:18.861015 | orchestrator | 2025-04-01 19:16:18.861770 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:18.862438 | orchestrator | Tuesday 01 April 2025 19:16:18 +0000 (0:00:01.193) 0:00:11.394 ********* 2025-04-01 19:16:20.089150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCY8KeElVjwDtUs2OE1fE1Tbbc9jytGCS7BdxcL0Xz8yn3kC3BaUOANMh4m84tuxbnXnulaKi/zFSfaKIcnlalarpeoT1F/lBMgoavjg8aOtDSm+o7mQgotUE5cPFM2/jj36A7l1hTH1vObrKZE4AqB3Q34J2f5/8xTNaDVhAPPrClS6DSOXEmjDZYAN1Q0cF1VUHy4ytBK3S7h+O6bd/G8RQ29KS5rD7wRg8mvOmCK5DqskqyFT1oF1rHY8TcSFGigwxQ1BZ2Tg+esrKTk8SgWssCLdghj6uCcAN9wfbQJHlFQLnMh1wFrrNKRRPLlMcY84fvs9ecZNv6yHc4ZcH5IDH9vf1Iyo/bhaPhc4H1p11ufaH0YQIh04yjqGxHzg/h2i6LUOK831IumdYU5evqLkI0eoJuxOas5Y0CeTsuAzcde2GCoY4iFVRBHYvvSvFWy5m/903KKhcJlsad7Xi8Hiltm6PQyp3eJbSu8LtGZFOH/cYF8ABID+EDIYCWTLIM=) 2025-04-01 19:16:20.089472 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKHDQFmQBa/Fh+dBK++dclKPjgg6DNlwInuwK6pT2XrgCbvKF5kDJplqg0VKjvIT+efJqAbopw/hMNTfuGnNZyo=) 2025-04-01 19:16:20.090632 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILlkp0YU4jXKvSFUa0lniXV7EO9wV3jzOHDy2xtkwJvI) 2025-04-01 19:16:20.091835 | orchestrator | 2025-04-01 19:16:20.092691 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:20.093352 | orchestrator | Tuesday 01 April 2025 19:16:20 +0000 (0:00:01.233) 0:00:12.628 ********* 2025-04-01 19:16:21.314672 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjDaX9tFty0qT1MoiKRR7sfwKyV6sVF5hz4wwCCZqO2AHslu5dSUYB+waM+CN0K8lSmEmdQ+7aVZweZCQMJKo7OaUOkI04iHfkdayPDYBHXHSrCETDJpqTXjY/qS1ZGclp3DM5ICJ/1lJt12yZAu8cNQ1YnVanztmOjB+p6NZlPgj8++dnV2y2YJTjTU3ae8gV3ptBrkAzsqXENKnTI5NRS0etBqrk28Q96j5jvEHYr8ciQLaXsp7SGhWFeBI1TbjYHJUWj7dmnke3IC2HypeR/SU0US6Ap2ClMgLhzp8VKb2/bJcCEz3pQk2kRBz1blJwQ4447+9nCRhWlcyRhf61x3Qt99ZpmRLEQfqCgdTimSUxHJ6SZ6SMEAe4+xKXEHxacis8fpAvVgSc4QNQ660rVTNsUZCBXjx7jZfcVE7rFsDBDpuTmUe+sNMo6DH9fzhMZNEeSSVqqtuHc1uXAvVniIpzJ0htygz7/aLBaNOsoW3i79NL4MZixJ8LO962G/8=) 2025-04-01 19:16:21.315010 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1jcZvMpJ5A+xfzYS7to7FZtCEEM41qEU7YCxR+N3t35C1SR7C1e7cEXRBL6lSpuIM+wP5hmPlT7yj5AjzycYs=) 2025-04-01 19:16:21.316806 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA1jTRQ32sSt/bYh10l7QkovlwD98e3Ihj3E2IBiiIV) 2025-04-01 19:16:21.317042 | orchestrator | 2025-04-01 19:16:21.317795 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:21.318336 | orchestrator | Tuesday 01 April 2025 19:16:21 +0000 (0:00:01.225) 0:00:13.854 ********* 2025-04-01 19:16:22.532928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2kk+LVJ750CECRkxpP+Zd51VFP0ky1PYIMplOWctiUOtoxKu+HiWGouY9IGQ3eWQwGZsCHz3whAOuIoSs8BW3LJ/2HUv/U+vNGV6DUl7VtqLmanAI9pCsOuWs0xb3kPi4J74gQMtsFdUiavH3OCRvznnYthqbd557PdkOHQ5JkI5eczscFkLaUOrNDzGUq189GAotNjsF3hGIFs1BDFKLL9XFEhVxLHonOuWMS6bLOiOQfhBNuhY/4ZSYnawN5IIX7NRo3VC2hHH1dWeqVN9OOsk3oH688N7d4Ck+Cl+1+GbKoded9Xi5WNIApDb4YZ2VqMsizQAFf9KCMz2zj8gpR+9NjCVGmKqKjOoXlAi6ztfDR1hPGpYeoM8iUS9YjdX+DEYmtcDo8xytBDCWvLsqXOfbTncx01kKpmX4yvMjrr2iDDooC9ceKu+v+Gkv9FSa2Rahfyf9aYJy46ZASKKB/vAS+LtwhopaTHE90cqzPsG9S5eQvG+y8BAU9enm1Oc=) 2025-04-01 19:16:22.534532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENAwkn1fJq01l6DXhTCX7BvFe1GnPaV6jRbHm/JY4kyAuXfcFusdeTTSUUYMrGAteaTKDAOHPazhwBv9TZN/LA=) 2025-04-01 
19:16:22.534583 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII7nG7LRX9LmDHQByIPjKu8VbyX+e1D5XhDWHswwxsa1) 2025-04-01 19:16:22.535562 | orchestrator | 2025-04-01 19:16:22.535937 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-04-01 19:16:22.537233 | orchestrator | Tuesday 01 April 2025 19:16:22 +0000 (0:00:01.217) 0:00:15.072 ********* 2025-04-01 19:16:27.927727 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-01 19:16:27.928801 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-01 19:16:27.928844 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-01 19:16:27.929506 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-01 19:16:27.930345 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-01 19:16:27.931599 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-01 19:16:27.932037 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-01 19:16:27.933128 | orchestrator | 2025-04-01 19:16:27.933590 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-04-01 19:16:27.933949 | orchestrator | Tuesday 01 April 2025 19:16:27 +0000 (0:00:05.394) 0:00:20.467 ********* 2025-04-01 19:16:28.134117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-01 19:16:28.134558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-01 19:16:28.135551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-01 19:16:28.136422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-01 19:16:28.136877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-01 19:16:28.137572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-01 19:16:28.138078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-01 19:16:28.138781 | orchestrator | 2025-04-01 19:16:28.139441 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:28.139938 | orchestrator | Tuesday 01 April 2025 19:16:28 +0000 (0:00:00.208) 0:00:20.675 ********* 2025-04-01 19:16:29.311615 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC89c9r4SshYTTPaGdZGfjo8ywoG12V3m821LaocDIpNjXDPsm0NDsHP2Bb6zrLGCEtxFBV2O48MzgeDQrOhgCkHA6+nR9rnFMxlxQmPwOD4DzMxUyAlGGbKUVZDdrolKbrPAjtKVxcQ6pPxChzN4miOTpXoTaNJ/WwL+AEaatZUw1AdYpoBR6PRvu8AJjD+jCztLubdPZAxVNmKO1Y7D/oQ6SZCPtTyrSH2OJQqNFjkij4GfRUwHgqbCSXmi0OGU+ZqDCyFLemEZxz/jgYbO24eL8SgN6btY0QLacrB4O6qutUCFpDk5QnC537qRG0JEOOvFnFpp+Q7elsZieAtcLqIqcqIXhmXSJLtlIK+Nhu/T18rPCtrcTe7ee9o+U2ru4Bu5hHT/SOb//aR/DoKFFgfviQzPBsAUhP28rGmHd9e3BMmsMITgV273FxWCdP7KZd2gkWk+iBa49xNuK54vL1KMc0qAGTGfS4CCgMpNL0tVGlOXckn9QxDw9Tj4Rim9E=) 2025-04-01 19:16:29.311931 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL9fUvL6Upj651D19e6D8UtsKTEHjgp7ocFt6JS+mywJ) 2025-04-01 19:16:29.312715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTfeLzrA/ykXL7MXlQcUPb1Eu1xOBlTyoub1G4APnrhB1Jsqr/yfOw/BZVcSdGtpntQ/Aj9MnniHdZVy/MdRYk=) 2025-04-01 19:16:29.313522 | orchestrator | 2025-04-01 19:16:29.314711 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:29.315224 | orchestrator | Tuesday 01 April 2025 19:16:29 +0000 (0:00:01.175) 0:00:21.850 ********* 2025-04-01 19:16:30.486439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVWLFKtRe7SwJF2mM73iK2iqepKeYTAHBOhu/ExVVRAp9UHxVUVMZyVBxI/kqqSeI/4auS6rVDB2RVVpgYXnZ0wWwsCGtqKqbxsaC+vzMgPlc62koS1JHxpy97Cg7HDzhdNs29TFnqYzgbdmnLD2sIQjB5myKdk/gDGbB5uy5/HDTFurLIAM0om9mG2UVbgeGO7Wixv2pkxnH9QWBbNWuO1mGVsAfOXV7fCkHBpW5kHNycMBuejH0khIzXWOIuJ4q3f6jLsRjyC6yn1jnxFu0eA/9KqoUgWEnaaUgXtN3U/T1B1SwYtdD2y7iYdOTYFo4xXjd2+Otp2XIg4QCnY3rSh0IADUS+h9Oc86IAwzhmDEG7wyAl1BoExd8e8IUpxEOEBdbb0hxnEzTK4kh2Do4K1+sc2TS9Lo3UASYWKQS7iSKUtorp7CD0lbfU5krELMu2OLsx10b3kDxRzExeDWqmBl8dGnXYFd9EqtluyWLwV3P/QozPWcy11FwMikYf5u0=) 2025-04-01 19:16:30.486637 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIwHHFGvxudf4C35rlqj43dIfLm3Urq5ordnVmkMEYAUwPy7G0cvvKH+38NeeGo1qbvG3CYhJTNZihCT1hzJsGA=) 2025-04-01 19:16:30.487026 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKo9dzbE9gS7rcsCqV32gUmkVCjdcjMYy6DM2Xgwn3Hb) 2025-04-01 19:16:30.488329 | orchestrator | 2025-04-01 19:16:30.488660 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:30.489513 | orchestrator | Tuesday 01 April 2025 19:16:30 +0000 (0:00:01.176) 0:00:23.026 ********* 2025-04-01 19:16:31.710548 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxwP7LhNv4g3HUgj0yDyGNrY7Vqgi9IFMHXEjVy8KFfvbB5kNMQjQiGv0QfuBv6YDZdpHkdEXHY/gqt8kYmd/eDRMf+oxuIVi9kPhNZW41guVVug+Hx7VaT/BzFCmNWpcvC4vhMtfhX1Mth3L57atLfh0UTlVIlIA+gnuJTlVVTosiAp0OEB8uRgi+dBeWkclFPs2Sf6YnsHOqm6dugUNlQQv86Y75LrY6goSbHdhRMkzcMUHB84u4rrKIDYXcDvH293ZKIoGErgeP9IkrU4xHvAQSOi3gLINtcAgd6uVMaK9WsNzj948Zg2PhjmLelWHgX5P97r9MqQjK5ORj6RmFwWAexlCdNLbHyVE6iY20QcUW4Pw2TzvX9lIAWPf9SUf/i+yivshsd5B4YyAC4mSHq/vjOhJfvTm4chl/gxp/0WsJyzoZOJuhXQl6Qc+GzAxk7EGyyzBEsEqbnLC1bgjlke7chjgeck+wpEgJ5njQYN3hn83Hk3+x1tJNd3MMa0s=) 2025-04-01 19:16:31.711993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDa361O1dkUR8FIyNLggXH7cG7NFKuwguq/ndTpU8jxqg7mVn4jnFhScpU6JqFXHAUV4lMIMiDJRpfh81okmpiI=) 2025-04-01 
19:16:31.712741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII6iVSAZLsQAs6Fx+EwpaEqKzGfmkg/NjmFmyrqxeDIr) 2025-04-01 19:16:31.713554 | orchestrator | 2025-04-01 19:16:31.714176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:31.714974 | orchestrator | Tuesday 01 April 2025 19:16:31 +0000 (0:00:01.221) 0:00:24.248 ********* 2025-04-01 19:16:32.862405 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMWQ+d5J3Ed0cj6KXKWasRyA1KOTY4mKw9IP9AIc8z77) 2025-04-01 19:16:32.863047 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfn0CUNOGe6/pcBOBzykP2ckk+2fatZtDBFDoSYVcr5y6vSeAQ7CpAeAClNCOBeVbERZxgh+wiTIBgWbpEuAkVmm9AZMCI/50YqCqVaNb6vNR/y4YgHAXuEByhaqlmQWDXXEmW+x3CcWnI7sBE8eH0Fpr/LwwS9rNhbFW3SfUteG2X6xYoQoXjM8nFiJFmY1bPj16Ux/n6o19lf6o4WqPIbBwFTH6w4ftBeR9BL54sJ4Y5qzqq0XMBFil4U53Ym0vutnHrrbs/T4i2s0FgeQvVObvgOwQzZ21To8kuv/AGgNAzKuYs4gTAN0721tLu9hgTxhvZoon+/QcZzooZflaDdy+HdYoaXjw+vrORbYnl4+tm3VLlxNYDyQtlq4/W6FApQZagfyaD3BWGGZzrVf+C2SWhxssb3mLt9fsDsLPZBcs1lq+5iVn59Q+ZSgUXAcYhn6UnTlKs/kQs3AmRPJ4aM+BMpoLK52rIgU5fK01SYH0cB438VDA1b67gs4vAdYM=) 2025-04-01 19:16:32.863862 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDOloAhQ2qz1GK59+9uM7W/9FOVKm3accacc3mdDcEplYM5TiEztCC8HE7EMp3IN+hD1ALUU2v6ZJYY5GXq/1yo=) 2025-04-01 19:16:32.865307 | orchestrator | 2025-04-01 19:16:32.866224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:32.866597 | orchestrator | Tuesday 01 April 2025 19:16:32 +0000 (0:00:01.153) 0:00:25.402 ********* 2025-04-01 19:16:34.069625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY8KeElVjwDtUs2OE1fE1Tbbc9jytGCS7BdxcL0Xz8yn3kC3BaUOANMh4m84tuxbnXnulaKi/zFSfaKIcnlalarpeoT1F/lBMgoavjg8aOtDSm+o7mQgotUE5cPFM2/jj36A7l1hTH1vObrKZE4AqB3Q34J2f5/8xTNaDVhAPPrClS6DSOXEmjDZYAN1Q0cF1VUHy4ytBK3S7h+O6bd/G8RQ29KS5rD7wRg8mvOmCK5DqskqyFT1oF1rHY8TcSFGigwxQ1BZ2Tg+esrKTk8SgWssCLdghj6uCcAN9wfbQJHlFQLnMh1wFrrNKRRPLlMcY84fvs9ecZNv6yHc4ZcH5IDH9vf1Iyo/bhaPhc4H1p11ufaH0YQIh04yjqGxHzg/h2i6LUOK831IumdYU5evqLkI0eoJuxOas5Y0CeTsuAzcde2GCoY4iFVRBHYvvSvFWy5m/903KKhcJlsad7Xi8Hiltm6PQyp3eJbSu8LtGZFOH/cYF8ABID+EDIYCWTLIM=) 2025-04-01 19:16:34.069938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKHDQFmQBa/Fh+dBK++dclKPjgg6DNlwInuwK6pT2XrgCbvKF5kDJplqg0VKjvIT+efJqAbopw/hMNTfuGnNZyo=) 2025-04-01 19:16:34.071362 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILlkp0YU4jXKvSFUa0lniXV7EO9wV3jzOHDy2xtkwJvI) 2025-04-01 19:16:34.072230 | orchestrator | 2025-04-01 19:16:34.072719 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:34.073508 | orchestrator | Tuesday 01 April 2025 19:16:34 +0000 (0:00:01.205) 0:00:26.608 ********* 2025-04-01 19:16:35.263107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjDaX9tFty0qT1MoiKRR7sfwKyV6sVF5hz4wwCCZqO2AHslu5dSUYB+waM+CN0K8lSmEmdQ+7aVZweZCQMJKo7OaUOkI04iHfkdayPDYBHXHSrCETDJpqTXjY/qS1ZGclp3DM5ICJ/1lJt12yZAu8cNQ1YnVanztmOjB+p6NZlPgj8++dnV2y2YJTjTU3ae8gV3ptBrkAzsqXENKnTI5NRS0etBqrk28Q96j5jvEHYr8ciQLaXsp7SGhWFeBI1TbjYHJUWj7dmnke3IC2HypeR/SU0US6Ap2ClMgLhzp8VKb2/bJcCEz3pQk2kRBz1blJwQ4447+9nCRhWlcyRhf61x3Qt99ZpmRLEQfqCgdTimSUxHJ6SZ6SMEAe4+xKXEHxacis8fpAvVgSc4QNQ660rVTNsUZCBXjx7jZfcVE7rFsDBDpuTmUe+sNMo6DH9fzhMZNEeSSVqqtuHc1uXAvVniIpzJ0htygz7/aLBaNOsoW3i79NL4MZixJ8LO962G/8=) 2025-04-01 19:16:35.263974 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO1jcZvMpJ5A+xfzYS7to7FZtCEEM41qEU7YCxR+N3t35C1SR7C1e7cEXRBL6lSpuIM+wP5hmPlT7yj5AjzycYs=) 2025-04-01 19:16:35.264656 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAA1jTRQ32sSt/bYh10l7QkovlwD98e3Ihj3E2IBiiIV) 2025-04-01 19:16:35.265777 | orchestrator | 2025-04-01 19:16:35.268147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-01 19:16:35.269178 | orchestrator | Tuesday 01 April 2025 19:16:35 +0000 (0:00:01.193) 0:00:27.801 ********* 2025-04-01 19:16:36.439146 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2kk+LVJ750CECRkxpP+Zd51VFP0ky1PYIMplOWctiUOtoxKu+HiWGouY9IGQ3eWQwGZsCHz3whAOuIoSs8BW3LJ/2HUv/U+vNGV6DUl7VtqLmanAI9pCsOuWs0xb3kPi4J74gQMtsFdUiavH3OCRvznnYthqbd557PdkOHQ5JkI5eczscFkLaUOrNDzGUq189GAotNjsF3hGIFs1BDFKLL9XFEhVxLHonOuWMS6bLOiOQfhBNuhY/4ZSYnawN5IIX7NRo3VC2hHH1dWeqVN9OOsk3oH688N7d4Ck+Cl+1+GbKoded9Xi5WNIApDb4YZ2VqMsizQAFf9KCMz2zj8gpR+9NjCVGmKqKjOoXlAi6ztfDR1hPGpYeoM8iUS9YjdX+DEYmtcDo8xytBDCWvLsqXOfbTncx01kKpmX4yvMjrr2iDDooC9ceKu+v+Gkv9FSa2Rahfyf9aYJy46ZASKKB/vAS+LtwhopaTHE90cqzPsG9S5eQvG+y8BAU9enm1Oc=) 2025-04-01 19:16:36.439702 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENAwkn1fJq01l6DXhTCX7BvFe1GnPaV6jRbHm/JY4kyAuXfcFusdeTTSUUYMrGAteaTKDAOHPazhwBv9TZN/LA=) 2025-04-01 19:16:36.440658 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII7nG7LRX9LmDHQByIPjKu8VbyX+e1D5XhDWHswwxsa1) 2025-04-01 19:16:36.441181 | orchestrator | 2025-04-01 19:16:36.441437 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-04-01 19:16:36.441898 | orchestrator | Tuesday 01 April 2025 19:16:36 +0000 (0:00:01.177) 0:00:28.979 ********* 2025-04-01 19:16:36.630800 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-01 19:16:36.632038 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-01 19:16:36.633014 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-01 19:16:36.634096 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-01 19:16:36.634903 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-01 19:16:36.635133 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-01 19:16:36.635760 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-01 19:16:36.635862 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:16:36.636465 | orchestrator | 2025-04-01 19:16:36.636828 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-04-01 19:16:36.637623 | orchestrator | Tuesday 01 April 2025 19:16:36 +0000 (0:00:00.192) 0:00:29.171 ********* 2025-04-01 19:16:36.695911 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:16:36.696670 | orchestrator | 2025-04-01 19:16:36.697445 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-04-01 19:16:36.697776 | orchestrator | Tuesday 01 April 2025 19:16:36 +0000 (0:00:00.065) 0:00:29.237 ********* 2025-04-01 19:16:36.769455 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:16:36.769889 | orchestrator | 2025-04-01 19:16:36.771511 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-04-01 19:16:37.652926 | orchestrator | Tuesday 01 April 2025 19:16:36 +0000 (0:00:00.072) 0:00:29.309 ********* 2025-04-01 19:16:37.653040 | orchestrator | changed: [testbed-manager] 2025-04-01 19:16:37.653826 | orchestrator | 2025-04-01 19:16:37.653859 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:16:37.654123 | orchestrator | 2025-04-01 19:16:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:16:37.654150 | orchestrator | 2025-04-01 19:16:37 | INFO  | Please wait and do not abort execution. 2025-04-01 19:16:37.654170 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:16:37.654832 | orchestrator | 2025-04-01 19:16:37.655178 | orchestrator | Tuesday 01 April 2025 19:16:37 +0000 (0:00:00.879) 0:00:30.189 ********* 2025-04-01 19:16:37.656133 | orchestrator | =============================================================================== 2025-04-01 19:16:37.657137 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.03s 2025-04-01 19:16:37.663457 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.39s 2025-04-01 19:16:37.663575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2025-04-01 19:16:37.663596 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2025-04-01 19:16:37.664725 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-04-01 19:16:37.665557 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-04-01 19:16:37.665958 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-04-01 19:16:37.667450 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-04-01 19:16:37.667764 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-04-01 19:16:37.668420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-04-01 19:16:37.668760 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-04-01 19:16:37.669268 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-04-01 19:16:37.669771 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-04-01 19:16:37.670730 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-04-01 19:16:37.671698 | 
orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-04-01 19:16:37.671726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-01 19:16:37.673614 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.88s 2025-04-01 19:16:37.674948 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.21s 2025-04-01 19:16:37.677309 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s 2025-04-01 19:16:38.190488 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-04-01 19:16:38.190585 | orchestrator | + osism apply squid 2025-04-01 19:16:39.763983 | orchestrator | 2025-04-01 19:16:39 | INFO  | Task 20689818-6ec8-4bec-aa52-ed8c377c8d60 (squid) was prepared for execution. 2025-04-01 19:16:43.241755 | orchestrator | 2025-04-01 19:16:39 | INFO  | It takes a moment until task 20689818-6ec8-4bec-aa52-ed8c377c8d60 (squid) has been started and output is visible here. 2025-04-01 19:16:43.241964 | orchestrator | 2025-04-01 19:16:43.242139 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-01 19:16:43.242675 | orchestrator | 2025-04-01 19:16:43.244845 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-01 19:16:43.367074 | orchestrator | Tuesday 01 April 2025 19:16:43 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-04-01 19:16:43.367117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-01 19:16:43.367986 | orchestrator | 2025-04-01 19:16:43.368341 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-01 19:16:43.370167 | orchestrator | Tuesday 01 April 2025 19:16:43 +0000 (0:00:00.125) 0:00:00.268 ********* 2025-04-01 19:16:45.031358 | orchestrator | ok: [testbed-manager] 2025-04-01 19:16:45.033411 | orchestrator | 2025-04-01 19:16:45.034365 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-01 19:16:45.036084 | orchestrator | Tuesday 01 April 2025 19:16:45 +0000 (0:00:01.663) 0:00:01.931 ********* 2025-04-01 19:16:46.306678 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-01 19:16:46.308055 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-01 19:16:46.308784 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-04-01 19:16:46.309695 | orchestrator | 2025-04-01 19:16:46.310366 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-01 19:16:46.311703 | orchestrator | Tuesday 01 April 2025 19:16:46 +0000 (0:00:01.276) 0:00:03.207 ********* 2025-04-01 19:16:47.491198 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-01 19:16:47.491610 | orchestrator | 2025-04-01 19:16:47.493456 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-01 19:16:47.495065 | orchestrator | Tuesday 01 April 2025 19:16:47 +0000 (0:00:01.182) 0:00:04.391 ********* 2025-04-01 19:16:47.875083 | orchestrator | ok: [testbed-manager] 2025-04-01 19:16:47.875532 | orchestrator | 2025-04-01 19:16:47.875563 | 
orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-04-01 19:16:47.876182 | orchestrator | Tuesday 01 April 2025 19:16:47 +0000 (0:00:00.386) 0:00:04.777 ********* 2025-04-01 19:16:48.983129 | orchestrator | changed: [testbed-manager] 2025-04-01 19:16:48.983533 | orchestrator | 2025-04-01 19:16:48.984483 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-01 19:16:48.985128 | orchestrator | Tuesday 01 April 2025 19:16:48 +0000 (0:00:01.107) 0:00:05.884 ********* 2025-04-01 19:17:22.215624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-04-01 19:17:22.216234 | orchestrator | ok: [testbed-manager] 2025-04-01 19:17:22.216292 | orchestrator | 2025-04-01 19:17:22.217422 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-01 19:17:22.219707 | orchestrator | Tuesday 01 April 2025 19:17:22 +0000 (0:00:33.230) 0:00:39.115 ********* 2025-04-01 19:17:34.796075 | orchestrator | changed: [testbed-manager] 2025-04-01 19:18:34.879816 | orchestrator | 2025-04-01 19:18:34.879915 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-01 19:18:34.879935 | orchestrator | Tuesday 01 April 2025 19:17:34 +0000 (0:00:12.575) 0:00:51.690 ********* 2025-04-01 19:18:34.879963 | orchestrator | Pausing for 60 seconds 2025-04-01 19:18:34.949919 | orchestrator | changed: [testbed-manager] 2025-04-01 19:18:34.950013 | orchestrator | 2025-04-01 19:18:34.950092 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-01 19:18:34.950109 | orchestrator | Tuesday 01 April 2025 19:18:34 +0000 (0:01:00.085) 0:01:51.775 ********* 2025-04-01 19:18:34.950137 | orchestrator | ok: [testbed-manager] 2025-04-01 19:18:34.950545 | orchestrator | 2025-04-01 19:18:34.951338 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-01 19:18:34.952097 | orchestrator | Tuesday 01 April 2025 19:18:34 +0000 (0:00:00.077) 0:01:51.853 ********* 2025-04-01 19:18:35.557167 | orchestrator | changed: [testbed-manager] 2025-04-01 19:18:35.557559 | orchestrator | 2025-04-01 19:18:35.558538 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:18:35.558678 | orchestrator | 2025-04-01 19:18:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:18:35.559025 | orchestrator | 2025-04-01 19:18:35 | INFO  | Please wait and do not abort execution. 
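The squid role above deploys the proxy via docker compose and then blocks on it: "Manage squid service" retried once before succeeding (about 33 s), the restart handler took roughly 12 s, a fixed 60-second pause follows, and a final handler waits for the container to report healthy. A sketch of such a health wait, assuming a compose service named squid in /opt/squid and the default <project>-<service>-1 container name (the role's actual retry and health commands are not shown in this log):

    # Sketch only: poll the squid container's health status after a restart.
    docker compose --project-directory /opt/squid up -d
    for i in $(seq 1 30); do
      # container name assumes the compose default naming scheme
      state="$(docker inspect --format '{{.State.Health.Status}}' squid-squid-1 2>/dev/null)"
      [ "$state" = "healthy" ] && break
      sleep 5
    done
    echo "squid health: ${state:-unknown}"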
2025-04-01 19:18:35.559452 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:18:35.560200 | orchestrator | 2025-04-01 19:18:35.560610 | orchestrator | Tuesday 01 April 2025 19:18:35 +0000 (0:00:00.607) 0:01:52.460 ********* 2025-04-01 19:18:35.561010 | orchestrator | =============================================================================== 2025-04-01 19:18:35.561485 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-04-01 19:18:35.561870 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.23s 2025-04-01 19:18:35.562299 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.58s 2025-04-01 19:18:35.562698 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.66s 2025-04-01 19:18:35.563168 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.28s 2025-04-01 19:18:35.563468 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.18s 2025-04-01 19:18:35.563920 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.11s 2025-04-01 19:18:35.564094 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-04-01 19:18:35.564575 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-04-01 19:18:35.564981 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.13s 2025-04-01 19:18:35.565163 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-04-01 19:18:36.222939 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-01 19:18:36.227281 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-04-01 19:18:36.227325 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-01 19:18:36.286974 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-01 19:18:36.292889 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-01 19:18:36.292914 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-04-01 19:18:36.292934 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-01 19:18:36.299366 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-01 19:18:36.305736 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-01 19:18:37.917660 | orchestrator | 2025-04-01 19:18:37 | INFO  | Task 6a0f4295-403a-49cb-a8b3-9925a1ad643d (operator) was prepared for execution. 2025-04-01 19:18:41.440315 | orchestrator | 2025-04-01 19:18:37 | INFO  | It takes a moment until task 6a0f4295-403a-49cb-a8b3-9925a1ad643d (operator) has been started and output is visible here. 
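The three sed calls above uncomment the VXLAN network-dispatcher entries: the network_dispatcher_scripts key in testbed-nodes.yml and the src/dest lines for vxlan.sh in both testbed-nodes.yml and testbed-managers.yml. A quick way to confirm the effect; the exact indentation comes from the original commented block, so it is only sketched in the comment:

    # Sketch only: after the seds, testbed-nodes.yml is expected to contain roughly
    #   network_dispatcher_scripts:
    #    - src: /opt/configuration/network/vxlan.sh
    #      dest: routable.d/vxlan.sh
    grep -n 'network_dispatcher_scripts\|vxlan.sh' \
      /opt/configuration/inventory/group_vars/testbed-nodes.yml \
      /opt/configuration/inventory/group_vars/testbed-managers.yml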
2025-04-01 19:18:41.440474 | orchestrator | 2025-04-01 19:18:41.441302 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-01 19:18:41.441889 | orchestrator | 2025-04-01 19:18:41.442992 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-01 19:18:41.446283 | orchestrator | Tuesday 01 April 2025 19:18:41 +0000 (0:00:00.101) 0:00:00.101 ********* 2025-04-01 19:18:45.061171 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:18:45.062125 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:18:45.063803 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:18:45.064430 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:18:45.065245 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:18:45.067372 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:18:45.068322 | orchestrator | 2025-04-01 19:18:45.068350 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-01 19:18:45.068373 | orchestrator | Tuesday 01 April 2025 19:18:45 +0000 (0:00:03.622) 0:00:03.723 ********* 2025-04-01 19:18:45.926315 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:18:45.926505 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:18:45.926528 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:18:45.927578 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:18:45.927767 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:18:45.928568 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:18:45.929125 | orchestrator | 2025-04-01 19:18:45.929608 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-01 19:18:45.930431 | orchestrator | 2025-04-01 19:18:45.933116 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-01 19:18:46.017393 | orchestrator | Tuesday 01 April 2025 19:18:45 +0000 (0:00:00.863) 0:00:04.587 ********* 2025-04-01 19:18:46.017497 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:18:46.056739 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:18:46.090390 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:18:46.148316 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:18:46.149505 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:18:46.149532 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:18:46.149547 | orchestrator | 2025-04-01 19:18:46.149562 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-01 19:18:46.149582 | orchestrator | Tuesday 01 April 2025 19:18:46 +0000 (0:00:00.223) 0:00:04.810 ********* 2025-04-01 19:18:46.216305 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:18:46.250230 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:18:46.270436 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:18:46.320503 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:18:46.321221 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:18:46.322378 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:18:46.325131 | orchestrator | 2025-04-01 19:18:46.327132 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-01 19:18:46.327236 | orchestrator | Tuesday 01 April 2025 19:18:46 +0000 (0:00:00.173) 0:00:04.984 ********* 2025-04-01 19:18:47.003607 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:47.004502 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:47.007193 | orchestrator | changed: [testbed-node-3] 2025-04-01 
19:18:47.007561 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:47.007589 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:18:47.007610 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:47.008373 | orchestrator | 2025-04-01 19:18:47.009083 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-01 19:18:47.009901 | orchestrator | Tuesday 01 April 2025 19:18:46 +0000 (0:00:00.681) 0:00:05.665 ********* 2025-04-01 19:18:47.943037 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:47.944036 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:18:47.947547 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:47.950207 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:47.950662 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:47.950692 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:18:47.950707 | orchestrator | 2025-04-01 19:18:47.950727 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-01 19:18:47.951068 | orchestrator | Tuesday 01 April 2025 19:18:47 +0000 (0:00:00.936) 0:00:06.602 ********* 2025-04-01 19:18:49.232842 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-01 19:18:49.233004 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-01 19:18:49.233019 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-01 19:18:49.233033 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-01 19:18:49.233042 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-01 19:18:49.233056 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-01 19:18:49.233595 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-01 19:18:49.234084 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-01 19:18:49.234480 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-01 19:18:49.235187 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-01 19:18:49.235578 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-01 19:18:49.239290 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-01 19:18:49.239710 | orchestrator | 2025-04-01 19:18:49.240283 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-01 19:18:49.240813 | orchestrator | Tuesday 01 April 2025 19:18:49 +0000 (0:00:01.292) 0:00:07.894 ********* 2025-04-01 19:18:50.678529 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:50.679122 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:18:50.679573 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:50.680664 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:50.680926 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:50.681627 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:18:50.682900 | orchestrator | 2025-04-01 19:18:50.683371 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-01 19:18:50.683939 | orchestrator | Tuesday 01 April 2025 19:18:50 +0000 (0:00:01.446) 0:00:09.341 ********* 2025-04-01 19:18:52.057795 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-01 19:18:52.320921 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-01 19:18:52.321032 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-01 19:18:52.321076 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.322201 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.324592 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.326012 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.328448 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.328885 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-01 19:18:52.330227 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.332604 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.332954 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.337790 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.338610 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.340300 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-01 19:18:52.341388 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.342134 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.343489 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.345496 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.346545 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.347679 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-01 19:18:52.349422 | orchestrator | 2025-04-01 19:18:52.350204 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-01 19:18:52.351663 | orchestrator | Tuesday 01 April 2025 19:18:52 +0000 (0:00:01.642) 0:00:10.984 ********* 2025-04-01 19:18:53.078205 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:53.079044 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:53.079206 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:18:53.080652 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:18:53.083370 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:53.083404 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:53.165654 | orchestrator | 2025-04-01 19:18:53.165719 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-01 19:18:53.165737 | orchestrator | Tuesday 01 April 2025 19:18:53 +0000 (0:00:00.753) 0:00:11.738 ********* 2025-04-01 19:18:53.165763 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:18:53.200441 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:18:53.228435 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:18:53.290870 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:18:53.292234 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:18:53.293194 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:18:53.294102 | orchestrator | 2025-04-01 19:18:53.295519 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-04-01 19:18:53.296352 | orchestrator | Tuesday 01 April 2025 19:18:53 +0000 (0:00:00.215) 0:00:11.953 ********* 2025-04-01 19:18:54.268563 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 19:18:54.271389 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:54.271544 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 19:18:54.271573 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:54.272709 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:18:54.273532 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:18:54.274652 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 19:18:54.275453 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-01 19:18:54.276438 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:18:54.276840 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:54.277374 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-01 19:18:54.278785 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:54.280275 | orchestrator | 2025-04-01 19:18:54.281276 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-01 19:18:54.282093 | orchestrator | Tuesday 01 April 2025 19:18:54 +0000 (0:00:00.975) 0:00:12.929 ********* 2025-04-01 19:18:54.320226 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:18:54.348393 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:18:54.374539 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:18:54.400792 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:18:54.438529 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:18:54.440500 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:18:54.441338 | orchestrator | 2025-04-01 19:18:54.441375 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-01 19:18:54.441456 | orchestrator | Tuesday 01 April 2025 19:18:54 +0000 (0:00:00.170) 0:00:13.100 ********* 2025-04-01 19:18:54.493638 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:18:54.550283 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:18:54.599342 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:18:54.635528 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:18:54.635642 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:18:54.637479 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:18:54.641773 | orchestrator | 2025-04-01 19:18:54.642495 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-01 19:18:54.643664 | orchestrator | Tuesday 01 April 2025 19:18:54 +0000 (0:00:00.198) 0:00:13.299 ********* 2025-04-01 19:18:54.682819 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:18:54.705582 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:18:54.765754 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:18:54.814180 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:18:54.815022 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:18:54.815390 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:18:54.816679 | orchestrator | 2025-04-01 19:18:54.819886 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-01 19:18:54.820444 | orchestrator | Tuesday 01 April 2025 19:18:54 +0000 (0:00:00.176) 0:00:13.476 ********* 2025-04-01 19:18:55.639736 | orchestrator | changed: [testbed-node-0] 2025-04-01 
19:18:55.640504 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:18:55.641762 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:18:55.643376 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:18:55.644798 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:18:55.646125 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:18:55.646988 | orchestrator | 2025-04-01 19:18:55.647022 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-01 19:18:55.647181 | orchestrator | Tuesday 01 April 2025 19:18:55 +0000 (0:00:00.825) 0:00:14.301 ********* 2025-04-01 19:18:55.716948 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:18:55.742915 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:18:55.785544 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:18:55.914580 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:18:55.915495 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:18:55.917024 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:18:55.918544 | orchestrator | 2025-04-01 19:18:55.919762 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:18:55.920472 | orchestrator | 2025-04-01 19:18:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:18:55.921344 | orchestrator | 2025-04-01 19:18:55 | INFO  | Please wait and do not abort execution. 2025-04-01 19:18:55.922335 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.923304 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.924551 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.926370 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.927246 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.928007 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:18:55.928821 | orchestrator | 2025-04-01 19:18:55.929522 | orchestrator | Tuesday 01 April 2025 19:18:55 +0000 (0:00:00.275) 0:00:14.577 ********* 2025-04-01 19:18:55.930152 | orchestrator | =============================================================================== 2025-04-01 19:18:55.930791 | orchestrator | Gathering Facts --------------------------------------------------------- 3.62s 2025-04-01 19:18:55.931539 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.64s 2025-04-01 19:18:55.932066 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.45s 2025-04-01 19:18:55.932636 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s 2025-04-01 19:18:55.933358 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.98s 2025-04-01 19:18:55.933755 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.94s 2025-04-01 19:18:55.934324 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s 2025-04-01 19:18:55.934860 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.83s 2025-04-01 19:18:55.935301 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.75s 2025-04-01 19:18:55.935909 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s 2025-04-01 19:18:55.936208 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s 2025-04-01 19:18:55.936675 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.22s 2025-04-01 19:18:55.937132 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-04-01 19:18:55.937580 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s 2025-04-01 19:18:55.937965 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-04-01 19:18:55.938930 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-04-01 19:18:55.939662 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-04-01 19:18:56.481221 | orchestrator | + osism apply --environment custom facts 2025-04-01 19:18:57.951072 | orchestrator | 2025-04-01 19:18:57 | INFO  | Trying to run play facts in environment custom 2025-04-01 19:18:58.007513 | orchestrator | 2025-04-01 19:18:58 | INFO  | Task 47779c25-c5e7-4ea1-9bef-fc31e6811547 (facts) was prepared for execution. 2025-04-01 19:19:01.316660 | orchestrator | 2025-04-01 19:18:58 | INFO  | It takes a moment until task 47779c25-c5e7-4ea1-9bef-fc31e6811547 (facts) has been started and output is visible here. 2025-04-01 19:19:01.316763 | orchestrator | 2025-04-01 19:19:01.317143 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-04-01 19:19:01.318746 | orchestrator | 2025-04-01 19:19:01.319230 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-01 19:19:01.319586 | orchestrator | Tuesday 01 April 2025 19:19:01 +0000 (0:00:00.109) 0:00:00.109 ********* 2025-04-01 19:19:02.521848 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:03.602783 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:03.609523 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:03.610213 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:19:03.613124 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:03.613874 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:19:03.614650 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:19:03.615132 | orchestrator | 2025-04-01 19:19:03.615839 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-04-01 19:19:03.618794 | orchestrator | Tuesday 01 April 2025 19:19:03 +0000 (0:00:02.272) 0:00:02.382 ********* 2025-04-01 19:19:04.937697 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:06.005907 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:06.006277 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:06.009439 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:06.009939 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:19:06.010462 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:19:06.011105 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:19:06.011244 | orchestrator | 2025-04-01 19:19:06.012171 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-04-01 19:19:06.012631 | orchestrator | 2025-04-01 19:19:06.013162 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-01 19:19:06.013906 | orchestrator | Tuesday 01 April 2025 19:19:05 +0000 (0:00:02.417) 0:00:04.800 ********* 2025-04-01 19:19:06.121131 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:06.121298 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:06.121665 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:06.121771 | orchestrator | 2025-04-01 19:19:06.125922 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-01 19:19:06.128017 | orchestrator | Tuesday 01 April 2025 19:19:06 +0000 (0:00:00.119) 0:00:04.919 ********* 2025-04-01 19:19:06.282231 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:06.282454 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:06.282650 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:06.283088 | orchestrator | 2025-04-01 19:19:06.283569 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-01 19:19:06.284005 | orchestrator | Tuesday 01 April 2025 19:19:06 +0000 (0:00:00.159) 0:00:05.079 ********* 2025-04-01 19:19:06.437371 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:06.438889 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:06.439916 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:06.441190 | orchestrator | 2025-04-01 19:19:06.442772 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-01 19:19:06.443660 | orchestrator | Tuesday 01 April 2025 19:19:06 +0000 (0:00:00.153) 0:00:05.233 ********* 2025-04-01 19:19:06.589957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:19:06.591435 | orchestrator | 2025-04-01 19:19:06.593396 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-01 19:19:07.093714 | orchestrator | Tuesday 01 April 2025 19:19:06 +0000 (0:00:00.153) 0:00:05.387 ********* 2025-04-01 19:19:07.093825 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:07.094160 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:07.095527 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:07.096664 | orchestrator | 2025-04-01 19:19:07.097375 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-01 19:19:07.098604 | orchestrator | Tuesday 01 April 2025 19:19:07 +0000 (0:00:00.502) 0:00:05.889 ********* 2025-04-01 19:19:07.220923 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:19:07.224393 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:19:07.225398 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:19:07.225871 | orchestrator | 2025-04-01 19:19:07.226602 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-01 19:19:07.230122 | orchestrator | Tuesday 01 April 2025 19:19:07 +0000 (0:00:00.130) 0:00:06.020 ********* 2025-04-01 19:19:08.379466 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:08.380561 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:08.381466 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:08.382331 | orchestrator | 2025-04-01 19:19:08.383562 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-04-01 19:19:08.384917 | orchestrator | Tuesday 01 April 2025 19:19:08 +0000 (0:00:01.156) 0:00:07.176 ********* 2025-04-01 19:19:08.927911 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:08.929220 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:08.930396 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:08.931663 | orchestrator | 2025-04-01 19:19:08.932318 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-01 19:19:08.933341 | orchestrator | Tuesday 01 April 2025 19:19:08 +0000 (0:00:00.547) 0:00:07.723 ********* 2025-04-01 19:19:10.109731 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:10.117339 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:25.586462 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:25.586577 | orchestrator | 2025-04-01 19:19:25.586595 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-01 19:19:25.586611 | orchestrator | Tuesday 01 April 2025 19:19:10 +0000 (0:00:01.171) 0:00:08.895 ********* 2025-04-01 19:19:25.586642 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:25.636654 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:25.636694 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:25.636710 | orchestrator | 2025-04-01 19:19:25.636725 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-04-01 19:19:25.636740 | orchestrator | Tuesday 01 April 2025 19:19:25 +0000 (0:00:15.481) 0:00:24.376 ********* 2025-04-01 19:19:25.636783 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:19:25.680166 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:19:25.680671 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:19:25.682214 | orchestrator | 2025-04-01 19:19:25.682611 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-04-01 19:19:25.683607 | orchestrator | Tuesday 01 April 2025 19:19:25 +0000 (0:00:00.102) 0:00:24.479 ********* 2025-04-01 19:19:34.866306 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:19:34.866829 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:19:34.868231 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:19:34.869225 | orchestrator | 2025-04-01 19:19:34.871211 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-01 19:19:34.871862 | orchestrator | Tuesday 01 April 2025 19:19:34 +0000 (0:00:09.183) 0:00:33.662 ********* 2025-04-01 19:19:35.388994 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:35.390105 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:35.390950 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:35.391363 | orchestrator | 2025-04-01 19:19:35.392224 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-01 19:19:35.393004 | orchestrator | Tuesday 01 April 2025 19:19:35 +0000 (0:00:00.524) 0:00:34.186 ********* 2025-04-01 19:19:39.196159 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-01 19:19:39.197164 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-01 19:19:39.199667 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-01 19:19:39.202136 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
2025-04-01 19:19:39.204078 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-01 19:19:39.204108 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-01 19:19:39.205025 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-01 19:19:39.206140 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-01 19:19:39.206659 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-01 19:19:39.207693 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-01 19:19:39.208212 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-01 19:19:39.208694 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-04-01 19:19:39.209017 | orchestrator | 2025-04-01 19:19:39.209584 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-01 19:19:39.210093 | orchestrator | Tuesday 01 April 2025 19:19:39 +0000 (0:00:03.806) 0:00:37.992 ********* 2025-04-01 19:19:40.332206 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:40.336714 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:40.336842 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:40.337802 | orchestrator | 2025-04-01 19:19:40.337839 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-01 19:19:40.341273 | orchestrator | 2025-04-01 19:19:40.344232 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:19:40.345137 | orchestrator | Tuesday 01 April 2025 19:19:40 +0000 (0:00:01.135) 0:00:39.128 ********* 2025-04-01 19:19:42.018004 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:45.407491 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:45.408351 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:45.410218 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:45.411181 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:45.411601 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:45.412285 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:45.413329 | orchestrator | 2025-04-01 19:19:45.413731 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:19:45.416569 | orchestrator | 2025-04-01 19:19:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:19:45.416722 | orchestrator | 2025-04-01 19:19:45 | INFO  | Please wait and do not abort execution. 
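As the INFO lines above indicate, the plays are driven via the osism CLI and run asynchronously: each osism apply call queues a task, and the Ansible output is streamed back once the task has started. In short, the sequence used at this point of the job is the following sketch; the two commands are taken verbatim from the job output, everything else about the environment is assumed.

  osism apply --environment custom facts   # copy the custom network/ceph device facts to the nodes
  osism apply bootstrap                    # apply the generic bootstrap roles seen further down
                                           # (hostname, hosts, proxy, resolvconf, repository, rsyslog, ...)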
2025-04-01 19:19:45.417558 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:19:45.418124 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:19:45.418638 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:19:45.419244 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:19:45.420080 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:19:45.420422 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:19:45.421976 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:19:45.423361 | orchestrator | 2025-04-01 19:19:45.423938 | orchestrator | Tuesday 01 April 2025 19:19:45 +0000 (0:00:05.077) 0:00:44.206 ********* 2025-04-01 19:19:45.424836 | orchestrator | =============================================================================== 2025-04-01 19:19:45.425504 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.48s 2025-04-01 19:19:45.426298 | orchestrator | Install required packages (Debian) -------------------------------------- 9.18s 2025-04-01 19:19:45.426588 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.08s 2025-04-01 19:19:45.427076 | orchestrator | Copy fact files --------------------------------------------------------- 3.81s 2025-04-01 19:19:45.427504 | orchestrator | Copy fact file ---------------------------------------------------------- 2.42s 2025-04-01 19:19:45.428161 | orchestrator | Create custom facts directory ------------------------------------------- 2.27s 2025-04-01 19:19:45.428937 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.17s 2025-04-01 19:19:45.429195 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.16s 2025-04-01 19:19:45.429559 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.14s 2025-04-01 19:19:45.430219 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.55s 2025-04-01 19:19:45.430887 | orchestrator | Create custom facts directory ------------------------------------------- 0.52s 2025-04-01 19:19:45.432199 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.50s 2025-04-01 19:19:45.432655 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2025-04-01 19:19:45.433386 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2025-04-01 19:19:45.434656 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s 2025-04-01 19:19:45.434793 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2025-04-01 19:19:45.435195 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-04-01 19:19:45.435552 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-04-01 19:19:45.956050 | orchestrator | + osism apply bootstrap 2025-04-01 19:19:47.459520 | 
orchestrator | 2025-04-01 19:19:47 | INFO  | Task 8554d704-f5f1-4062-b054-9afeb655ca47 (bootstrap) was prepared for execution. 2025-04-01 19:19:47.460225 | orchestrator | 2025-04-01 19:19:47 | INFO  | It takes a moment until task 8554d704-f5f1-4062-b054-9afeb655ca47 (bootstrap) has been started and output is visible here. 2025-04-01 19:19:50.909914 | orchestrator | 2025-04-01 19:19:50.910121 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-04-01 19:19:50.910146 | orchestrator | 2025-04-01 19:19:50.910170 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-04-01 19:19:50.910665 | orchestrator | Tuesday 01 April 2025 19:19:50 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-04-01 19:19:51.002109 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:51.040674 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:51.066696 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:51.107227 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:51.202635 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:51.203918 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:51.205081 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:51.205819 | orchestrator | 2025-04-01 19:19:51.206761 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-01 19:19:51.207340 | orchestrator | 2025-04-01 19:19:51.209009 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:19:51.213033 | orchestrator | Tuesday 01 April 2025 19:19:51 +0000 (0:00:00.294) 0:00:00.424 ********* 2025-04-01 19:19:55.367644 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:55.367887 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:55.368771 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:55.371372 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:55.372169 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:55.373293 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:55.374292 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:55.374927 | orchestrator | 2025-04-01 19:19:55.375657 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-04-01 19:19:55.376321 | orchestrator | 2025-04-01 19:19:55.377572 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:19:55.377913 | orchestrator | Tuesday 01 April 2025 19:19:55 +0000 (0:00:04.165) 0:00:04.589 ********* 2025-04-01 19:19:55.442297 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-01 19:19:55.481489 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-01 19:19:55.541308 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-01 19:19:55.541340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-01 19:19:55.541359 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-01 19:19:55.541441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:19:55.541463 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-01 19:19:55.541714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:19:55.542140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-01 19:19:55.542498 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-04-01 19:19:55.542689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:19:55.542896 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:19:55.543290 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-01 19:19:55.610002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:19:55.610586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:19:55.610632 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-01 19:19:55.610701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:19:55.610943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-01 19:19:55.611354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-01 19:19:55.611513 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:19:55.613630 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:19:55.613857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:19:55.613882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:19:55.897574 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:19:55.898117 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:19:55.899144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:19:55.901015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:19:55.901809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:19:55.903178 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-01 19:19:55.903959 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:19:55.915485 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:19:55.917468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:19:55.917604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:19:55.917685 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:19:55.917944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:19:55.918341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:19:55.918617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:19:55.918915 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:19:55.919443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:19:55.919989 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-01 19:19:55.920496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:19:55.920954 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:19:55.921475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:19:55.921912 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:19:55.922459 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:19:55.922828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:19:55.923142 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-01 19:19:55.923380 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:19:55.923564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:19:55.923817 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-01 19:19:55.924140 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-01 19:19:55.924574 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-01 19:19:55.924835 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:19:55.925006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-01 19:19:55.925384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-01 19:19:55.925600 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:19:55.925835 | orchestrator | 2025-04-01 19:19:55.926142 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-01 19:19:55.926343 | orchestrator | 2025-04-01 19:19:55.926541 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-04-01 19:19:55.926787 | orchestrator | Tuesday 01 April 2025 19:19:55 +0000 (0:00:00.528) 0:00:05.118 ********* 2025-04-01 19:19:55.980532 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:56.011299 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:56.039007 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:56.072117 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:56.139428 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:56.140416 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:56.141128 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:56.141722 | orchestrator | 2025-04-01 19:19:56.142931 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-01 19:19:56.143370 | orchestrator | Tuesday 01 April 2025 19:19:56 +0000 (0:00:00.242) 0:00:05.360 ********* 2025-04-01 19:19:57.569196 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:57.570137 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:57.570169 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:57.570191 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:57.573175 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:57.573491 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:57.574735 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:57.575562 | orchestrator | 2025-04-01 19:19:57.576364 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-01 19:19:57.577097 | orchestrator | Tuesday 01 April 2025 19:19:57 +0000 (0:00:01.427) 0:00:06.788 ********* 2025-04-01 19:19:58.925693 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:19:58.925874 | orchestrator | ok: [testbed-manager] 2025-04-01 19:19:58.926884 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:19:58.930525 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:19:58.931205 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:19:58.931817 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:19:58.932635 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:19:58.933144 | orchestrator | 2025-04-01 19:19:58.933630 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-01 19:19:58.934453 | orchestrator | Tuesday 01 April 2025 19:19:58 +0000 (0:00:01.358) 0:00:08.146 ********* 2025-04-01 19:19:59.241789 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:19:59.242106 | orchestrator | 2025-04-01 19:19:59.246376 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-01 19:20:01.496402 | orchestrator | Tuesday 01 April 2025 19:19:59 +0000 (0:00:00.316) 0:00:08.462 ********* 2025-04-01 19:20:01.496528 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:01.496588 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:01.496917 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:01.497185 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:01.497498 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:01.498683 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:01.498853 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:01.500378 | orchestrator | 2025-04-01 19:20:01.501794 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-01 19:20:01.501821 | orchestrator | Tuesday 01 April 2025 19:20:01 +0000 (0:00:02.254) 0:00:10.717 ********* 2025-04-01 19:20:01.575808 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:01.779770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:01.780888 | orchestrator | 2025-04-01 19:20:01.781960 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-01 19:20:01.782327 | orchestrator | Tuesday 01 April 2025 19:20:01 +0000 (0:00:00.283) 0:00:11.001 ********* 2025-04-01 19:20:02.934717 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:02.934890 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:02.934915 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:02.935545 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:02.936208 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:02.937241 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:02.937472 | orchestrator | 2025-04-01 19:20:02.937500 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-01 19:20:02.939044 | orchestrator | Tuesday 01 April 2025 19:20:02 +0000 (0:00:01.154) 0:00:12.155 ********* 2025-04-01 19:20:03.000041 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:03.651652 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:03.652382 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:03.653474 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:03.655298 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:03.655976 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:03.656835 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:03.657672 | orchestrator | 2025-04-01 19:20:03.658319 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-04-01 19:20:03.658974 | orchestrator | Tuesday 01 April 2025 19:20:03 +0000 (0:00:00.715) 0:00:12.871 ********* 2025-04-01 19:20:03.763608 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:03.806076 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:03.837222 | 
orchestrator | skipping: [testbed-node-5] 2025-04-01 19:20:04.179441 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:04.181942 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:04.183077 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:04.184470 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:04.185906 | orchestrator | 2025-04-01 19:20:04.187664 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-01 19:20:04.188888 | orchestrator | Tuesday 01 April 2025 19:20:04 +0000 (0:00:00.529) 0:00:13.400 ********* 2025-04-01 19:20:04.262668 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:04.292516 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:04.315904 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:04.346525 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:20:04.423741 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:04.423889 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:04.424851 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:04.426287 | orchestrator | 2025-04-01 19:20:04.426602 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-01 19:20:04.427479 | orchestrator | Tuesday 01 April 2025 19:20:04 +0000 (0:00:00.244) 0:00:13.645 ********* 2025-04-01 19:20:04.776376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:04.776793 | orchestrator | 2025-04-01 19:20:04.777183 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-01 19:20:04.777841 | orchestrator | Tuesday 01 April 2025 19:20:04 +0000 (0:00:00.351) 0:00:13.997 ********* 2025-04-01 19:20:05.110345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:05.110556 | orchestrator | 2025-04-01 19:20:05.111466 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-01 19:20:05.111833 | orchestrator | Tuesday 01 April 2025 19:20:05 +0000 (0:00:00.334) 0:00:14.331 ********* 2025-04-01 19:20:06.643992 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:06.644784 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:06.645825 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:06.646971 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:06.649360 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:06.650117 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:06.650169 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:06.650550 | orchestrator | 2025-04-01 19:20:06.650921 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-01 19:20:06.651972 | orchestrator | Tuesday 01 April 2025 19:20:06 +0000 (0:00:01.531) 0:00:15.863 ********* 2025-04-01 19:20:06.722241 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:06.752165 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:06.787880 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:06.815178 | orchestrator | skipping: 
[testbed-node-5] 2025-04-01 19:20:06.872357 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:06.872919 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:06.874711 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:06.876047 | orchestrator | 2025-04-01 19:20:06.876816 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-01 19:20:06.878068 | orchestrator | Tuesday 01 April 2025 19:20:06 +0000 (0:00:00.230) 0:00:16.094 ********* 2025-04-01 19:20:07.547947 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:07.548104 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:07.548621 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:07.549700 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:07.550453 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:07.551169 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:07.552039 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:07.553324 | orchestrator | 2025-04-01 19:20:07.555474 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-01 19:20:07.632869 | orchestrator | Tuesday 01 April 2025 19:20:07 +0000 (0:00:00.674) 0:00:16.769 ********* 2025-04-01 19:20:07.632915 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:07.658729 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:07.687396 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:07.715167 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:20:07.788043 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:07.790206 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:07.791734 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:07.793154 | orchestrator | 2025-04-01 19:20:07.794337 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-01 19:20:07.796166 | orchestrator | Tuesday 01 April 2025 19:20:07 +0000 (0:00:00.240) 0:00:17.009 ********* 2025-04-01 19:20:08.391487 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:08.392097 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:08.393080 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:08.393957 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:08.395072 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:08.396122 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:08.396460 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:08.397486 | orchestrator | 2025-04-01 19:20:08.398107 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-01 19:20:08.398869 | orchestrator | Tuesday 01 April 2025 19:20:08 +0000 (0:00:00.603) 0:00:17.613 ********* 2025-04-01 19:20:09.768496 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:09.768854 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:09.771710 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:09.775453 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:09.775519 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:09.776479 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:09.777612 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:09.778706 | orchestrator | 2025-04-01 19:20:09.781825 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-01 19:20:09.782207 | orchestrator | Tuesday 01 April 2025 
19:20:09 +0000 (0:00:01.374) 0:00:18.987 ********* 2025-04-01 19:20:11.108845 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:11.109080 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:11.109108 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:11.109128 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:11.109182 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:11.109676 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:11.110096 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:11.110993 | orchestrator | 2025-04-01 19:20:11.111415 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-01 19:20:11.111672 | orchestrator | Tuesday 01 April 2025 19:20:11 +0000 (0:00:01.338) 0:00:20.326 ********* 2025-04-01 19:20:11.464555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:11.465465 | orchestrator | 2025-04-01 19:20:11.466853 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-01 19:20:11.468235 | orchestrator | Tuesday 01 April 2025 19:20:11 +0000 (0:00:00.360) 0:00:20.686 ********* 2025-04-01 19:20:11.542645 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:13.167934 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:13.168752 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:13.168786 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:13.168804 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:13.168827 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:13.169382 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:13.169666 | orchestrator | 2025-04-01 19:20:13.174381 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-01 19:20:13.264181 | orchestrator | Tuesday 01 April 2025 19:20:13 +0000 (0:00:01.700) 0:00:22.387 ********* 2025-04-01 19:20:13.264293 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:13.299934 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:13.341351 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:13.385062 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:13.464584 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:13.464792 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:13.465995 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:13.466770 | orchestrator | 2025-04-01 19:20:13.467522 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-01 19:20:13.467980 | orchestrator | Tuesday 01 April 2025 19:20:13 +0000 (0:00:00.296) 0:00:22.684 ********* 2025-04-01 19:20:13.542912 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:13.571021 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:13.602626 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:13.632371 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:13.723596 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:13.724740 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:13.724777 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:13.725927 | orchestrator | 2025-04-01 19:20:13.726630 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-01 19:20:13.727412 | 
orchestrator | Tuesday 01 April 2025 19:20:13 +0000 (0:00:00.259) 0:00:22.943 ********* 2025-04-01 19:20:13.823162 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:13.856899 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:13.896518 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:13.929730 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:14.009500 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:14.011897 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:14.011943 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:14.012733 | orchestrator | 2025-04-01 19:20:14.013870 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-01 19:20:14.384405 | orchestrator | Tuesday 01 April 2025 19:20:14 +0000 (0:00:00.288) 0:00:23.231 ********* 2025-04-01 19:20:14.384516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:14.387263 | orchestrator | 2025-04-01 19:20:14.393071 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-01 19:20:15.080443 | orchestrator | Tuesday 01 April 2025 19:20:14 +0000 (0:00:00.371) 0:00:23.603 ********* 2025-04-01 19:20:15.080567 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:15.080853 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:15.081629 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:15.082721 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:15.083611 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:15.084240 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:15.084605 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:15.085307 | orchestrator | 2025-04-01 19:20:15.086355 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-01 19:20:15.087311 | orchestrator | Tuesday 01 April 2025 19:20:15 +0000 (0:00:00.699) 0:00:24.302 ********* 2025-04-01 19:20:15.171785 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:15.206045 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:15.231461 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:15.261451 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:20:15.350823 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:15.351109 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:15.352387 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:15.352839 | orchestrator | 2025-04-01 19:20:15.353585 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-01 19:20:15.356369 | orchestrator | Tuesday 01 April 2025 19:20:15 +0000 (0:00:00.270) 0:00:24.573 ********* 2025-04-01 19:20:16.636802 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:16.637281 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:16.639164 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:16.639273 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:16.639295 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:16.639574 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:16.640542 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:16.640832 | orchestrator | 2025-04-01 19:20:16.641489 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-04-01 19:20:16.641785 | orchestrator | Tuesday 01 April 2025 19:20:16 +0000 (0:00:01.284) 0:00:25.857 ********* 2025-04-01 19:20:17.299040 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:17.300377 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:17.301564 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:17.303401 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:17.303647 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:17.304692 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:17.305621 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:17.306163 | orchestrator | 2025-04-01 19:20:17.306905 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-01 19:20:17.307339 | orchestrator | Tuesday 01 April 2025 19:20:17 +0000 (0:00:00.660) 0:00:26.517 ********* 2025-04-01 19:20:18.605873 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:18.606326 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:18.608084 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:18.608891 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:18.610206 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:18.611160 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:18.611854 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:18.612542 | orchestrator | 2025-04-01 19:20:18.613059 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-01 19:20:18.613545 | orchestrator | Tuesday 01 April 2025 19:20:18 +0000 (0:00:01.306) 0:00:27.824 ********* 2025-04-01 19:20:33.676941 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:33.677424 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:33.677462 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:33.678360 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:33.682338 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:33.683979 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:33.684694 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:33.685373 | orchestrator | 2025-04-01 19:20:33.685879 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-04-01 19:20:33.687785 | orchestrator | Tuesday 01 April 2025 19:20:33 +0000 (0:00:15.067) 0:00:42.891 ********* 2025-04-01 19:20:33.734791 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:33.768806 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:33.820834 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:33.849407 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:33.933528 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:33.933985 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:33.935346 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:33.936692 | orchestrator | 2025-04-01 19:20:33.937669 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-04-01 19:20:34.014382 | orchestrator | Tuesday 01 April 2025 19:20:33 +0000 (0:00:00.263) 0:00:43.155 ********* 2025-04-01 19:20:34.014458 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:34.070009 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:34.107351 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:34.187830 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:34.188778 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:34.188809 | orchestrator | ok: [testbed-node-1] 2025-04-01 
19:20:34.189788 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:34.190190 | orchestrator | 2025-04-01 19:20:34.191131 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-04-01 19:20:34.191555 | orchestrator | Tuesday 01 April 2025 19:20:34 +0000 (0:00:00.255) 0:00:43.410 ********* 2025-04-01 19:20:34.286183 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:34.320300 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:34.360880 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:34.393283 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:34.464753 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:34.468542 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:34.468859 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:34.469472 | orchestrator | 2025-04-01 19:20:34.469865 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-04-01 19:20:34.470618 | orchestrator | Tuesday 01 April 2025 19:20:34 +0000 (0:00:00.276) 0:00:43.686 ********* 2025-04-01 19:20:34.822657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:34.822993 | orchestrator | 2025-04-01 19:20:34.823025 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-04-01 19:20:34.823371 | orchestrator | Tuesday 01 April 2025 19:20:34 +0000 (0:00:00.355) 0:00:44.041 ********* 2025-04-01 19:20:37.072012 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:37.072297 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:37.073889 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:37.074591 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:37.076392 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:37.077414 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:37.078100 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:37.079218 | orchestrator | 2025-04-01 19:20:37.079423 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-04-01 19:20:37.080201 | orchestrator | Tuesday 01 April 2025 19:20:37 +0000 (0:00:02.248) 0:00:46.290 ********* 2025-04-01 19:20:38.385137 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:38.385871 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:38.385961 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:38.386584 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:38.387355 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:38.387793 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:38.388268 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:38.388974 | orchestrator | 2025-04-01 19:20:38.389814 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-04-01 19:20:38.390884 | orchestrator | Tuesday 01 April 2025 19:20:38 +0000 (0:00:01.312) 0:00:47.602 ********* 2025-04-01 19:20:39.314855 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:39.315055 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:39.316126 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:39.316548 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:39.316916 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:39.317991 | orchestrator | ok: 
[testbed-node-0] 2025-04-01 19:20:39.318319 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:39.318355 | orchestrator | 2025-04-01 19:20:39.318631 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-04-01 19:20:39.319150 | orchestrator | Tuesday 01 April 2025 19:20:39 +0000 (0:00:00.928) 0:00:48.531 ********* 2025-04-01 19:20:39.640193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:39.641652 | orchestrator | 2025-04-01 19:20:39.643019 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-04-01 19:20:39.644051 | orchestrator | Tuesday 01 April 2025 19:20:39 +0000 (0:00:00.326) 0:00:48.858 ********* 2025-04-01 19:20:40.696014 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:40.697052 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:40.697097 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:40.697113 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:40.697128 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:40.697151 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:40.697495 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:40.698414 | orchestrator | 2025-04-01 19:20:40.699353 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-04-01 19:20:40.700090 | orchestrator | Tuesday 01 April 2025 19:20:40 +0000 (0:00:01.053) 0:00:49.912 ********* 2025-04-01 19:20:40.804764 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:20:40.843769 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:20:40.877325 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:20:41.075627 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:20:41.076430 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:20:41.076472 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:20:41.076932 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:20:41.077298 | orchestrator | 2025-04-01 19:20:41.077761 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-04-01 19:20:41.078118 | orchestrator | Tuesday 01 April 2025 19:20:41 +0000 (0:00:00.385) 0:00:50.297 ********* 2025-04-01 19:20:54.601755 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:20:54.602003 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:20:54.602089 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:20:54.602112 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:20:54.603582 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:20:54.605529 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:20:54.607073 | orchestrator | changed: [testbed-manager] 2025-04-01 19:20:54.608326 | orchestrator | 2025-04-01 19:20:54.609501 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-04-01 19:20:54.610594 | orchestrator | Tuesday 01 April 2025 19:20:54 +0000 (0:00:13.520) 0:01:03.817 ********* 2025-04-01 19:20:55.691594 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:55.691810 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:55.692999 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:55.693615 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:55.696499 | 
orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:55.696786 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:55.696810 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:55.696825 | orchestrator | 2025-04-01 19:20:55.696845 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-04-01 19:20:55.697570 | orchestrator | Tuesday 01 April 2025 19:20:55 +0000 (0:00:01.091) 0:01:04.909 ********* 2025-04-01 19:20:56.707735 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:56.708702 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:56.709913 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:56.711641 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:56.712170 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:56.713737 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:56.714436 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:56.715077 | orchestrator | 2025-04-01 19:20:56.718450 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-04-01 19:20:56.795075 | orchestrator | Tuesday 01 April 2025 19:20:56 +0000 (0:00:01.017) 0:01:05.927 ********* 2025-04-01 19:20:56.795114 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:56.821153 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:56.857828 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:56.889797 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:56.954545 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:56.955456 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:56.956338 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:56.957224 | orchestrator | 2025-04-01 19:20:56.957552 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-04-01 19:20:56.958143 | orchestrator | Tuesday 01 April 2025 19:20:56 +0000 (0:00:00.248) 0:01:06.175 ********* 2025-04-01 19:20:57.046508 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:57.085078 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:57.118431 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:57.158907 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:57.243738 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:57.244934 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:57.245907 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:57.247490 | orchestrator | 2025-04-01 19:20:57.248199 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-04-01 19:20:57.249414 | orchestrator | Tuesday 01 April 2025 19:20:57 +0000 (0:00:00.289) 0:01:06.465 ********* 2025-04-01 19:20:57.600509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:20:57.600992 | orchestrator | 2025-04-01 19:20:57.601787 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-04-01 19:20:57.602953 | orchestrator | Tuesday 01 April 2025 19:20:57 +0000 (0:00:00.355) 0:01:06.820 ********* 2025-04-01 19:20:59.664219 | orchestrator | ok: [testbed-manager] 2025-04-01 19:20:59.665060 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:20:59.666895 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:20:59.666995 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:20:59.668975 | 
orchestrator | ok: [testbed-node-5] 2025-04-01 19:20:59.670295 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:20:59.670656 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:20:59.672451 | orchestrator | 2025-04-01 19:20:59.673825 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-04-01 19:20:59.674901 | orchestrator | Tuesday 01 April 2025 19:20:59 +0000 (0:00:02.062) 0:01:08.883 ********* 2025-04-01 19:21:00.328723 | orchestrator | changed: [testbed-manager] 2025-04-01 19:21:00.329466 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:21:00.329512 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:21:00.330591 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:21:00.331926 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:21:00.331956 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:21:00.332818 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:21:00.333731 | orchestrator | 2025-04-01 19:21:00.334279 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-04-01 19:21:00.334924 | orchestrator | Tuesday 01 April 2025 19:21:00 +0000 (0:00:00.664) 0:01:09.548 ********* 2025-04-01 19:21:00.429160 | orchestrator | ok: [testbed-manager] 2025-04-01 19:21:00.465451 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:21:00.503279 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:21:00.542515 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:21:00.605842 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:21:00.607410 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:21:00.608853 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:21:00.612086 | orchestrator | 2025-04-01 19:21:00.612929 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-04-01 19:21:00.613789 | orchestrator | Tuesday 01 April 2025 19:21:00 +0000 (0:00:00.279) 0:01:09.827 ********* 2025-04-01 19:21:01.964398 | orchestrator | ok: [testbed-manager] 2025-04-01 19:21:01.964589 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:21:01.965076 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:21:01.965524 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:21:01.968781 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:21:01.969861 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:21:01.970868 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:21:01.971495 | orchestrator | 2025-04-01 19:21:01.972371 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-04-01 19:21:01.972705 | orchestrator | Tuesday 01 April 2025 19:21:01 +0000 (0:00:01.356) 0:01:11.184 ********* 2025-04-01 19:21:04.078090 | orchestrator | changed: [testbed-manager] 2025-04-01 19:21:04.078692 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:21:04.078743 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:21:04.079918 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:21:04.081847 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:21:04.082795 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:21:04.083582 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:21:04.083991 | orchestrator | 2025-04-01 19:21:04.085003 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-04-01 19:21:04.086334 | orchestrator | Tuesday 01 April 2025 19:21:04 +0000 (0:00:02.112) 0:01:13.296 ********* 2025-04-01 19:21:07.290965 | orchestrator | ok: 
[testbed-manager] 2025-04-01 19:21:07.291622 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:21:07.293375 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:21:07.294156 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:21:07.295022 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:21:07.295774 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:21:07.296539 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:21:07.297957 | orchestrator | 2025-04-01 19:21:07.298426 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-04-01 19:21:07.298938 | orchestrator | Tuesday 01 April 2025 19:21:07 +0000 (0:00:03.212) 0:01:16.509 ********* 2025-04-01 19:21:46.155388 | orchestrator | ok: [testbed-manager] 2025-04-01 19:21:46.160698 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:21:46.160737 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:21:46.165176 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:21:46.165449 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:21:46.165479 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:21:46.166529 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:21:46.167174 | orchestrator | 2025-04-01 19:21:46.167585 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-04-01 19:21:46.168024 | orchestrator | Tuesday 01 April 2025 19:21:46 +0000 (0:00:38.858) 0:01:55.367 ********* 2025-04-01 19:22:49.972287 | orchestrator | changed: [testbed-manager] 2025-04-01 19:22:49.972466 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:22:49.972495 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:22:49.974813 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:22:49.975438 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:22:49.975555 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:22:49.975889 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:22:49.976338 | orchestrator | 2025-04-01 19:22:49.976719 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-04-01 19:22:49.977316 | orchestrator | Tuesday 01 April 2025 19:22:49 +0000 (0:01:03.823) 0:02:59.191 ********* 2025-04-01 19:22:51.630342 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:22:51.630487 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:22:51.631377 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:22:51.632376 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:22:51.632728 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:22:51.633487 | orchestrator | ok: [testbed-manager] 2025-04-01 19:22:51.634141 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:22:51.634576 | orchestrator | 2025-04-01 19:22:51.635349 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-04-01 19:22:51.635664 | orchestrator | Tuesday 01 April 2025 19:22:51 +0000 (0:00:01.659) 0:03:00.851 ********* 2025-04-01 19:23:05.229882 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:05.230139 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:05.231066 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:05.231103 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:05.231120 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:05.231137 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:05.231162 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:05.232084 | orchestrator | 2025-04-01 19:23:05.233495 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-04-01 19:23:05.233936 | orchestrator | Tuesday 01 April 2025 19:23:05 +0000 (0:00:13.596) 0:03:14.447 ********* 2025-04-01 19:23:05.678148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-01 19:23:05.678396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-01 19:23:05.678790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-01 19:23:05.679536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-01 19:23:05.679933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-04-01 19:23:05.680483 | orchestrator | 2025-04-01 19:23:05.680903 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-01 19:23:05.681311 | orchestrator | Tuesday 01 April 2025 19:23:05 +0000 (0:00:00.450) 0:03:14.898 ********* 2025-04-01 19:23:05.721777 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-01 19:23:05.748223 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:05.786787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-01 19:23:05.788085 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-01 19:23:05.821739 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:23:05.822434 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-01 19:23:05.858827 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:23:05.897207 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:23:06.543706 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:23:06.544746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:23:06.545529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:23:06.546157 | orchestrator | 2025-04-01 19:23:06.547379 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-01 19:23:06.547686 | orchestrator | Tuesday 01 April 2025 19:23:06 +0000 (0:00:00.865) 0:03:15.763 ********* 2025-04-01 19:23:06.583712 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-01 19:23:06.644565 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-01 19:23:06.645200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-01 19:23:06.645272 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-01 19:23:06.645692 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-01 19:23:06.646005 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-01 19:23:06.646826 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-01 19:23:06.647362 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-01 19:23:06.647495 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-01 19:23:06.648315 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-01 19:23:06.648703 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-01 19:23:06.649342 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-01 19:23:06.649476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-01 19:23:06.650108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-01 19:23:06.695519 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-01 19:23:06.695959 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:06.696008 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-01 19:23:06.696420 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-01 19:23:06.696800 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-01 19:23:06.697077 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-01 19:23:06.697548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-01 19:23:06.698076 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-01 19:23:06.698413 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-01 19:23:06.754890 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-01 19:23:06.757193 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:23:06.757225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-01 19:23:06.757547 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-01 19:23:06.758845 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-01 19:23:06.760380 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-01 19:23:06.761054 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-01 19:23:06.762405 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-01 19:23:06.763893 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-01 19:23:06.764364 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-01 19:23:06.767455 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-01 19:23:06.814070 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-01 19:23:06.814134 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-01 19:23:06.814164 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:23:06.815036 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-01 19:23:06.815853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-01 19:23:06.817058 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-01 19:23:06.817957 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-01 19:23:06.818300 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-01 19:23:06.819284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-01 19:23:06.859197 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:23:12.165663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-01 19:23:12.166422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-01 19:23:12.166471 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-01 19:23:12.167681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-01 19:23:12.169093 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-01 19:23:12.169336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-01 19:23:12.170104 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-01 19:23:12.170714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-01 19:23:12.171152 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-01 19:23:12.172315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-01 19:23:12.172657 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-01 19:23:12.173959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-01 19:23:12.174592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-01 19:23:12.174895 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-01 19:23:12.175643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-01 19:23:12.176332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-01 19:23:12.177193 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-01 19:23:12.177643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-01 19:23:12.178406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-01 19:23:12.178949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-01 19:23:12.179303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-01 19:23:12.179926 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-01 19:23:12.180361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-01 19:23:12.180995 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-01 19:23:12.182454 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-01 19:23:12.183228 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-01 19:23:12.183675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-01 19:23:12.185088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-01 19:23:12.185679 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-01 19:23:12.185964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-01 19:23:12.186406 | orchestrator | 2025-04-01 19:23:12.186911 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-01 19:23:12.187572 | orchestrator | Tuesday 01 April 2025 19:23:12 +0000 (0:00:05.622) 0:03:21.385 ********* 2025-04-01 19:23:13.683528 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.689602 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 
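The sysctl task blocks above apply kernel parameters per host group: the elasticsearch and rabbitmq values land on the control nodes (testbed-node-0/1/2), vm.swappiness=1 is being applied on all hosts, and the conntrack and inotify limits for the compute and k3s_node groups follow. As a rough illustration only -- the actual osism.commons.sysctl role may be implemented differently -- a task of this shape can be expressed with the ansible.posix.sysctl module, which both persists the value and applies it immediately:

    # Illustrative sketch; the parameter name and value are taken from the log
    # above, but the module choice and loop layout are assumptions about the role.
    - name: Set sysctl parameters on generic
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop:
        - { name: vm.swappiness, value: 1 }

The skipping/changed pattern in the log is what such a group-scoped task produces: hosts outside the targeted group report every loop item as skipped, while hosts inside the group report each parameter as changed.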
2025-04-01 19:23:13.690096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.691295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.691639 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.692829 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.694640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-01 19:23:13.696784 | orchestrator | 2025-04-01 19:23:13.696856 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-01 19:23:13.697320 | orchestrator | Tuesday 01 April 2025 19:23:13 +0000 (0:00:01.515) 0:03:22.900 ********* 2025-04-01 19:23:13.744549 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-01 19:23:13.778221 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:13.880228 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-01 19:23:13.881284 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-01 19:23:14.302083 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:23:14.303213 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:23:14.304577 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-01 19:23:14.305719 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:23:14.306973 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-01 19:23:14.307815 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-01 19:23:14.308735 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-01 19:23:14.309688 | orchestrator | 2025-04-01 19:23:14.310215 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-01 19:23:14.310709 | orchestrator | Tuesday 01 April 2025 19:23:14 +0000 (0:00:00.622) 0:03:23.523 ********* 2025-04-01 19:23:14.382431 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-01 19:23:14.416225 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:14.505854 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-01 19:23:14.966630 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-01 19:23:14.967705 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:23:14.968382 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:23:14.969772 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-01 19:23:14.970195 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:23:14.971268 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-01 19:23:14.971590 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-04-01 19:23:14.972338 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-01 19:23:14.972690 | orchestrator | 2025-04-01 19:23:14.973434 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-01 19:23:14.973946 | orchestrator | Tuesday 01 April 2025 19:23:14 +0000 (0:00:00.664) 0:03:24.187 ********* 2025-04-01 19:23:15.068025 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:15.103598 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:23:15.133797 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:23:15.162473 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:23:15.336404 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:23:15.337851 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:23:15.339425 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:23:15.339457 | orchestrator | 2025-04-01 19:23:15.340778 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-01 19:23:15.341267 | orchestrator | Tuesday 01 April 2025 19:23:15 +0000 (0:00:00.367) 0:03:24.555 ********* 2025-04-01 19:23:20.762106 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:20.763275 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:20.764686 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:20.765466 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:20.766277 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:20.767341 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:20.767650 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:20.768164 | orchestrator | 2025-04-01 19:23:20.768982 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-01 19:23:20.769410 | orchestrator | Tuesday 01 April 2025 19:23:20 +0000 (0:00:05.428) 0:03:29.983 ********* 2025-04-01 19:23:20.843806 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-01 19:23:20.891640 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:20.892623 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-01 19:23:20.893400 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-01 19:23:20.933568 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:23:20.986550 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:23:20.986698 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-01 19:23:20.987426 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-01 19:23:21.032586 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:23:21.104458 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:23:21.104886 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-01 19:23:21.105678 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:23:21.106302 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-01 19:23:21.106639 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:23:21.108104 | orchestrator | 2025-04-01 19:23:21.108525 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-01 19:23:21.109458 | orchestrator | Tuesday 01 April 2025 19:23:21 +0000 (0:00:00.342) 0:03:30.326 ********* 2025-04-01 19:23:22.251149 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-04-01 19:23:22.251370 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-01 19:23:22.253678 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-04-01 19:23:22.254171 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-01 19:23:22.255329 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-01 19:23:22.256309 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-01 19:23:22.256932 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-01 19:23:22.256963 | orchestrator | 2025-04-01 19:23:22.257950 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-01 19:23:22.258141 | orchestrator | Tuesday 01 April 2025 19:23:22 +0000 (0:00:01.143) 0:03:31.470 ********* 2025-04-01 19:23:22.927656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:23:22.928626 | orchestrator | 2025-04-01 19:23:22.929625 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-04-01 19:23:22.934359 | orchestrator | Tuesday 01 April 2025 19:23:22 +0000 (0:00:00.677) 0:03:32.147 ********* 2025-04-01 19:23:24.367810 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:24.368483 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:24.368514 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:24.368539 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:24.369296 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:24.370299 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:24.371383 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:24.371566 | orchestrator | 2025-04-01 19:23:24.372643 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-01 19:23:24.373526 | orchestrator | Tuesday 01 April 2025 19:23:24 +0000 (0:00:01.437) 0:03:33.584 ********* 2025-04-01 19:23:25.086970 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:25.087892 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:25.089523 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:25.090474 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:25.090502 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:25.091642 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:25.092562 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:25.092894 | orchestrator | 2025-04-01 19:23:25.093814 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-01 19:23:25.094325 | orchestrator | Tuesday 01 April 2025 19:23:25 +0000 (0:00:00.721) 0:03:34.306 ********* 2025-04-01 19:23:25.823580 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:25.823726 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:25.825271 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:25.826715 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:25.828052 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:25.829059 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:25.830877 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:25.832133 | orchestrator | 2025-04-01 19:23:25.832989 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-01 19:23:25.833707 | orchestrator | Tuesday 01 April 2025 19:23:25 +0000 (0:00:00.737) 0:03:35.044 ********* 2025-04-01 19:23:26.509574 | orchestrator | ok: [testbed-manager] 2025-04-01 
19:23:26.510787 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:26.510827 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:26.511163 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:26.512448 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:26.513575 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:26.514305 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:26.515205 | orchestrator | 2025-04-01 19:23:26.516182 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-01 19:23:26.517075 | orchestrator | Tuesday 01 April 2025 19:23:26 +0000 (0:00:00.685) 0:03:35.729 ********* 2025-04-01 19:23:27.638603 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533635.6055973, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.639908 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533612.0378547, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.641828 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533616.4844365, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.643293 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533625.9981349, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.643328 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533619.2320402, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.644499 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533614.4199142, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.644672 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743533627.9605472, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.646904 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533658.9940462, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.647493 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533566.9164295, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.648601 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533561.8568525, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.649437 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533564.2089567, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.650261 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533572.5823455, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.650423 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533577.937702, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.651102 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743533567.9790323, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 19:23:27.652331 | orchestrator | 2025-04-01 19:23:27.652956 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-01 19:23:27.653743 | orchestrator | Tuesday 01 April 2025 19:23:27 +0000 (0:00:01.126) 0:03:36.856 ********* 2025-04-01 19:23:28.906013 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:28.906456 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:28.907552 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:28.908955 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:28.909733 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:28.910499 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:28.911583 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:28.913500 | orchestrator | 2025-04-01 19:23:28.913781 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-01 19:23:28.914430 | orchestrator | Tuesday 01 April 2025 19:23:28 +0000 (0:00:01.270) 0:03:38.126 ********* 2025-04-01 19:23:30.251185 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:30.251363 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:30.252296 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:30.252366 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:30.252707 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:30.253587 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:30.254313 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:30.254915 | orchestrator | 2025-04-01 19:23:30.254942 | orchestrator | TASK [osism.commons.motd : Configure SSH to print 
the motd] ******************** 2025-04-01 19:23:30.255143 | orchestrator | Tuesday 01 April 2025 19:23:30 +0000 (0:00:01.345) 0:03:39.471 ********* 2025-04-01 19:23:30.338208 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:23:30.381672 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:23:30.420539 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:23:30.457586 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:23:30.495392 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:23:30.562522 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:23:30.563358 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:23:30.563608 | orchestrator | 2025-04-01 19:23:30.564708 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-04-01 19:23:30.564787 | orchestrator | Tuesday 01 April 2025 19:23:30 +0000 (0:00:00.312) 0:03:39.784 ********* 2025-04-01 19:23:31.362609 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:31.362790 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:31.362830 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:31.362912 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:31.362942 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:31.364688 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:31.364729 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:31.364914 | orchestrator | 2025-04-01 19:23:31.365437 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-04-01 19:23:31.368411 | orchestrator | Tuesday 01 April 2025 19:23:31 +0000 (0:00:00.799) 0:03:40.583 ********* 2025-04-01 19:23:31.840646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:23:31.841175 | orchestrator | 2025-04-01 19:23:31.841768 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-04-01 19:23:31.842179 | orchestrator | Tuesday 01 April 2025 19:23:31 +0000 (0:00:00.478) 0:03:41.061 ********* 2025-04-01 19:23:40.281204 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:40.281447 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:40.281656 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:40.281681 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:40.281695 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:40.281714 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:40.281853 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:40.282796 | orchestrator | 2025-04-01 19:23:40.283753 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-04-01 19:23:40.283783 | orchestrator | Tuesday 01 April 2025 19:23:40 +0000 (0:00:08.439) 0:03:49.500 ********* 2025-04-01 19:23:41.461777 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:41.461936 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:41.461959 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:41.461979 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:41.462770 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:41.463581 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:41.464747 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:41.465148 | orchestrator | 2025-04-01 19:23:41.465821 | orchestrator | 
TASK [osism.services.rng : Manage rng service] ********************************* 2025-04-01 19:23:41.466737 | orchestrator | Tuesday 01 April 2025 19:23:41 +0000 (0:00:01.177) 0:03:50.678 ********* 2025-04-01 19:23:42.539663 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:42.539994 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:42.541366 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:42.542398 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:42.544393 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:42.545671 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:42.546983 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:42.548375 | orchestrator | 2025-04-01 19:23:42.549388 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-04-01 19:23:42.550135 | orchestrator | Tuesday 01 April 2025 19:23:42 +0000 (0:00:01.080) 0:03:51.758 ********* 2025-04-01 19:23:43.023820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:23:43.024505 | orchestrator | 2025-04-01 19:23:43.025893 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-04-01 19:23:43.026719 | orchestrator | Tuesday 01 April 2025 19:23:43 +0000 (0:00:00.485) 0:03:52.244 ********* 2025-04-01 19:23:52.354787 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:52.354966 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:52.354988 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:52.355003 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:52.355022 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:52.355490 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:52.357110 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:52.358099 | orchestrator | 2025-04-01 19:23:52.359278 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-04-01 19:23:52.360038 | orchestrator | Tuesday 01 April 2025 19:23:52 +0000 (0:00:09.327) 0:04:01.572 ********* 2025-04-01 19:23:53.076767 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:53.077335 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:53.078357 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:53.079564 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:53.080312 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:53.080628 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:53.081378 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:53.082395 | orchestrator | 2025-04-01 19:23:53.082762 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-04-01 19:23:53.083052 | orchestrator | Tuesday 01 April 2025 19:23:53 +0000 (0:00:00.725) 0:04:02.298 ********* 2025-04-01 19:23:54.479581 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:54.479736 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:54.480547 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:54.481475 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:54.482488 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:54.482543 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:54.482812 | orchestrator | changed: [testbed-node-0] 
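The smartd role follows the same install, configure, manage pattern seen for rsyslog and rng above: install the smartmontools package, create /var/log/smartd, copy the configuration file, then make sure the daemon is enabled and running. The service-management step that follows is usually a plain service task; a minimal sketch, assuming ansible.builtin.service is used (the real osism.services.smartd role and the exact unit name may differ):

    # Hypothetical sketch of a "manage service" step; only the intent comes from
    # the log -- the module choice and the unit name (smartd vs. smartmontools
    # on Debian/Ubuntu) are assumptions.
    - name: Manage smartd service
      ansible.builtin.service:
        name: smartmontools
        state: started
        enabled: true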
2025-04-01 19:23:54.483534 | orchestrator | 2025-04-01 19:23:54.483919 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-04-01 19:23:54.484303 | orchestrator | Tuesday 01 April 2025 19:23:54 +0000 (0:00:01.402) 0:04:03.700 ********* 2025-04-01 19:23:55.636556 | orchestrator | changed: [testbed-manager] 2025-04-01 19:23:55.639596 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:23:55.639685 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:23:55.639701 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:23:55.639712 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:23:55.639726 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:23:55.641267 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:23:55.642069 | orchestrator | 2025-04-01 19:23:55.642762 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-04-01 19:23:55.643532 | orchestrator | Tuesday 01 April 2025 19:23:55 +0000 (0:00:01.155) 0:04:04.855 ********* 2025-04-01 19:23:55.738701 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:55.782547 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:55.871548 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:55.915635 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:55.994082 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:55.994757 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:55.996072 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:55.996820 | orchestrator | 2025-04-01 19:23:55.997769 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-04-01 19:23:55.998286 | orchestrator | Tuesday 01 April 2025 19:23:55 +0000 (0:00:00.360) 0:04:05.216 ********* 2025-04-01 19:23:56.109043 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:56.152003 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:56.190334 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:56.249468 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:56.348971 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:56.349439 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:56.350192 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:56.350758 | orchestrator | 2025-04-01 19:23:56.351386 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-04-01 19:23:56.351624 | orchestrator | Tuesday 01 April 2025 19:23:56 +0000 (0:00:00.353) 0:04:05.569 ********* 2025-04-01 19:23:56.475225 | orchestrator | ok: [testbed-manager] 2025-04-01 19:23:56.513140 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:23:56.561441 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:23:56.598453 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:23:56.682358 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:23:56.682534 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:23:56.682602 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:23:56.684055 | orchestrator | 2025-04-01 19:23:56.684436 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-04-01 19:23:56.684844 | orchestrator | Tuesday 01 April 2025 19:23:56 +0000 (0:00:00.335) 0:04:05.905 ********* 2025-04-01 19:24:01.462861 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:24:01.464348 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:24:01.464847 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:24:01.465415 | orchestrator 
| ok: [testbed-node-5] 2025-04-01 19:24:01.470157 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:24:01.470392 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:24:01.470413 | orchestrator | ok: [testbed-manager] 2025-04-01 19:24:01.470430 | orchestrator | 2025-04-01 19:24:01.470829 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-04-01 19:24:01.471336 | orchestrator | Tuesday 01 April 2025 19:24:01 +0000 (0:00:04.778) 0:04:10.684 ********* 2025-04-01 19:24:01.995024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:24:01.995135 | orchestrator | 2025-04-01 19:24:01.995815 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-04-01 19:24:01.998397 | orchestrator | Tuesday 01 April 2025 19:24:01 +0000 (0:00:00.528) 0:04:11.213 ********* 2025-04-01 19:24:02.076658 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.079274 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-04-01 19:24:02.128627 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.128650 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-04-01 19:24:02.128668 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:24:02.130714 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.132345 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-04-01 19:24:02.180406 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:24:02.232017 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.232045 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:24:02.232694 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-04-01 19:24:02.236409 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.272255 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-04-01 19:24:02.273387 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:24:02.274468 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.350336 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-04-01 19:24:02.351944 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:24:02.352876 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:24:02.356550 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-04-01 19:24:02.357253 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-04-01 19:24:02.357276 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:24:02.357295 | orchestrator | 2025-04-01 19:24:02.358115 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-04-01 19:24:02.358951 | orchestrator | Tuesday 01 April 2025 19:24:02 +0000 (0:00:00.360) 0:04:11.573 ********* 2025-04-01 19:24:02.851543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:24:02.852495 | orchestrator | 2025-04-01 19:24:02.852521 
| orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-04-01 19:24:02.853037 | orchestrator | Tuesday 01 April 2025 19:24:02 +0000 (0:00:00.499) 0:04:12.073 ********* 2025-04-01 19:24:02.939606 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-04-01 19:24:02.940428 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-04-01 19:24:02.982126 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:24:03.027807 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-04-01 19:24:03.027845 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:24:03.030361 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-04-01 19:24:03.071449 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:24:03.072087 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-04-01 19:24:03.121436 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:24:03.121944 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-04-01 19:24:03.212728 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:24:03.213756 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:24:03.214831 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-04-01 19:24:03.216112 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:24:03.216992 | orchestrator | 2025-04-01 19:24:03.218426 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-04-01 19:24:03.219288 | orchestrator | Tuesday 01 April 2025 19:24:03 +0000 (0:00:00.360) 0:04:12.433 ********* 2025-04-01 19:24:03.749836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:24:03.749956 | orchestrator | 2025-04-01 19:24:03.750856 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-04-01 19:24:36.085529 | orchestrator | Tuesday 01 April 2025 19:24:03 +0000 (0:00:00.536) 0:04:12.969 ********* 2025-04-01 19:24:36.085669 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:24:36.085737 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:24:36.085758 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:24:36.086161 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:24:36.086724 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:24:36.087409 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:24:36.087802 | orchestrator | changed: [testbed-manager] 2025-04-01 19:24:36.088226 | orchestrator | 2025-04-01 19:24:36.089075 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-04-01 19:24:36.091150 | orchestrator | Tuesday 01 April 2025 19:24:36 +0000 (0:00:32.335) 0:04:45.305 ********* 2025-04-01 19:24:44.339842 | orchestrator | changed: [testbed-manager] 2025-04-01 19:24:44.340014 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:24:44.340042 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:24:44.340502 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:24:44.341190 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:24:44.342484 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:24:44.342953 | orchestrator | changed: 
[testbed-node-0] 2025-04-01 19:24:44.343691 | orchestrator | 2025-04-01 19:24:44.344680 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-04-01 19:24:44.345311 | orchestrator | Tuesday 01 April 2025 19:24:44 +0000 (0:00:08.253) 0:04:53.558 ********* 2025-04-01 19:24:52.551130 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:24:52.551341 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:24:52.551374 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:24:52.551634 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:24:52.552138 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:24:52.552343 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:24:52.552633 | orchestrator | changed: [testbed-manager] 2025-04-01 19:24:52.552933 | orchestrator | 2025-04-01 19:24:52.553780 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-04-01 19:24:54.162550 | orchestrator | Tuesday 01 April 2025 19:24:52 +0000 (0:00:08.214) 0:05:01.773 ********* 2025-04-01 19:24:54.162673 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:24:54.162858 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:24:54.163912 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:24:54.164463 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:24:54.165342 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:24:54.165971 | orchestrator | ok: [testbed-manager] 2025-04-01 19:24:54.166388 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:24:54.167373 | orchestrator | 2025-04-01 19:24:54.167884 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-04-01 19:24:54.168118 | orchestrator | Tuesday 01 April 2025 19:24:54 +0000 (0:00:01.609) 0:05:03.382 ********* 2025-04-01 19:25:00.066642 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:00.068203 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:00.068274 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:00.068936 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:00.068969 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:00.070529 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:00.071175 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:00.071570 | orchestrator | 2025-04-01 19:25:00.072520 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-04-01 19:25:00.073179 | orchestrator | Tuesday 01 April 2025 19:25:00 +0000 (0:00:05.904) 0:05:09.287 ********* 2025-04-01 19:25:00.677658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:25:00.678226 | orchestrator | 2025-04-01 19:25:00.679366 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-04-01 19:25:00.680187 | orchestrator | Tuesday 01 April 2025 19:25:00 +0000 (0:00:00.611) 0:05:09.899 ********* 2025-04-01 19:25:01.478988 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:01.479121 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:01.482563 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:01.483072 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:01.483088 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:01.483098 | orchestrator | 
changed: [testbed-node-1] 2025-04-01 19:25:01.483108 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:01.483118 | orchestrator | 2025-04-01 19:25:01.483128 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-04-01 19:25:01.483142 | orchestrator | Tuesday 01 April 2025 19:25:01 +0000 (0:00:00.798) 0:05:10.697 ********* 2025-04-01 19:25:03.052520 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:25:03.234774 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:25:03.985855 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:25:03.985957 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:25:03.985974 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:25:03.986081 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:03.986101 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:25:03.986126 | orchestrator | 2025-04-01 19:25:03.986142 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-04-01 19:25:03.986158 | orchestrator | Tuesday 01 April 2025 19:25:03 +0000 (0:00:01.575) 0:05:12.273 ********* 2025-04-01 19:25:03.986187 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:03.986344 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:03.986781 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:03.986808 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:03.986828 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:03.987097 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:03.987324 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:03.988023 | orchestrator | 2025-04-01 19:25:03.988054 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-04-01 19:25:04.050845 | orchestrator | Tuesday 01 April 2025 19:25:03 +0000 (0:00:00.931) 0:05:13.205 ********* 2025-04-01 19:25:04.050919 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:04.121611 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:04.157844 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:04.193434 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:04.279738 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:04.280818 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:04.281879 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:04.282421 | orchestrator | 2025-04-01 19:25:04.282966 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-04-01 19:25:04.283399 | orchestrator | Tuesday 01 April 2025 19:25:04 +0000 (0:00:00.295) 0:05:13.500 ********* 2025-04-01 19:25:04.417645 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:04.463128 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:04.501697 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:04.545858 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:04.765645 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:04.766981 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:04.768273 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:04.769344 | orchestrator | 2025-04-01 19:25:04.770796 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-04-01 19:25:04.772001 | orchestrator | Tuesday 01 April 2025 19:25:04 +0000 (0:00:00.486) 0:05:13.987 ********* 2025-04-01 19:25:04.861788 | orchestrator | ok: [testbed-manager] 2025-04-01 
19:25:04.903016 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:25:04.944570 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:25:04.987437 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:25:05.031996 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:25:05.111562 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:25:05.111834 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:25:05.112486 | orchestrator | 2025-04-01 19:25:05.112896 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-04-01 19:25:05.113375 | orchestrator | Tuesday 01 April 2025 19:25:05 +0000 (0:00:00.346) 0:05:14.333 ********* 2025-04-01 19:25:05.183667 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:05.240124 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:05.282665 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:05.325032 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:05.368167 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:05.446844 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:05.447554 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:05.447872 | orchestrator | 2025-04-01 19:25:05.448789 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-04-01 19:25:05.449358 | orchestrator | Tuesday 01 April 2025 19:25:05 +0000 (0:00:00.334) 0:05:14.668 ********* 2025-04-01 19:25:05.571514 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:05.603293 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:25:05.641186 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:25:05.700715 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:25:05.789067 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:25:05.790478 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:25:05.791336 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:25:05.794258 | orchestrator | 2025-04-01 19:25:05.888491 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-04-01 19:25:05.888524 | orchestrator | Tuesday 01 April 2025 19:25:05 +0000 (0:00:00.342) 0:05:15.011 ********* 2025-04-01 19:25:05.888544 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:05.943062 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:05.981098 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:06.016576 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:06.055996 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:06.140708 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:06.140993 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:06.142459 | orchestrator | 2025-04-01 19:25:06.143871 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-04-01 19:25:06.144430 | orchestrator | Tuesday 01 April 2025 19:25:06 +0000 (0:00:00.350) 0:05:15.361 ********* 2025-04-01 19:25:06.213970 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:06.250162 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:06.285183 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:06.324251 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:06.389292 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:06.586740 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:06.587841 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:06.587877 | orchestrator | 2025-04-01 19:25:06.587955 | 
orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-04-01 19:25:06.591617 | orchestrator | Tuesday 01 April 2025 19:25:06 +0000 (0:00:00.446) 0:05:15.808 ********* 2025-04-01 19:25:07.090999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:25:07.091209 | orchestrator | 2025-04-01 19:25:07.093375 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-04-01 19:25:07.093556 | orchestrator | Tuesday 01 April 2025 19:25:07 +0000 (0:00:00.504) 0:05:16.312 ********* 2025-04-01 19:25:08.162197 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:08.162631 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:25:08.163635 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:25:08.164348 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:25:08.165368 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:25:08.166277 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:25:08.167709 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:25:08.168460 | orchestrator | 2025-04-01 19:25:08.169840 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-04-01 19:25:08.170937 | orchestrator | Tuesday 01 April 2025 19:25:08 +0000 (0:00:01.068) 0:05:17.381 ********* 2025-04-01 19:25:11.154527 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:25:11.155148 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:25:11.156528 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:25:11.160543 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:25:11.161059 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:25:11.161211 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:25:11.161231 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:11.161281 | orchestrator | 2025-04-01 19:25:11.161318 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-04-01 19:25:11.161469 | orchestrator | Tuesday 01 April 2025 19:25:11 +0000 (0:00:02.994) 0:05:20.376 ********* 2025-04-01 19:25:11.223850 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-04-01 19:25:11.325020 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-04-01 19:25:11.326163 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-04-01 19:25:11.326414 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-04-01 19:25:11.328098 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-04-01 19:25:11.328897 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-04-01 19:25:11.415287 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:25:11.415641 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-04-01 19:25:11.415863 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-04-01 19:25:11.416417 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-04-01 19:25:11.525437 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:11.526353 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-04-01 19:25:11.526869 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-04-01 19:25:11.528620 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2025-04-01 19:25:11.623762 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:11.625204 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-04-01 19:25:11.628734 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-04-01 19:25:11.629332 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-04-01 19:25:11.698088 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:11.699416 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-04-01 19:25:11.862796 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:11.863161 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-04-01 19:25:11.864704 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-04-01 19:25:11.865795 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:11.866869 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-04-01 19:25:11.867751 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-04-01 19:25:11.869281 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-04-01 19:25:11.870445 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:11.870598 | orchestrator | 2025-04-01 19:25:11.870820 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-04-01 19:25:11.871683 | orchestrator | Tuesday 01 April 2025 19:25:11 +0000 (0:00:00.704) 0:05:21.080 ********* 2025-04-01 19:25:18.536852 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:18.537045 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:18.540437 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:18.541384 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:18.541408 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:18.541426 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:18.541446 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:18.544436 | orchestrator | 2025-04-01 19:25:18.545997 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-04-01 19:25:18.547193 | orchestrator | Tuesday 01 April 2025 19:25:18 +0000 (0:00:06.676) 0:05:27.757 ********* 2025-04-01 19:25:19.803667 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:19.803839 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:19.803864 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:19.804079 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:19.804447 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:19.805645 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:19.805671 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:19.806095 | orchestrator | 2025-04-01 19:25:19.806479 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-04-01 19:25:19.807596 | orchestrator | Tuesday 01 April 2025 19:25:19 +0000 (0:00:01.265) 0:05:29.022 ********* 2025-04-01 19:25:26.729143 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:26.729520 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:26.730448 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:26.730487 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:26.731199 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:26.731557 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:26.732263 | orchestrator | changed: [testbed-node-5] 2025-04-01 
19:25:26.732695 | orchestrator | 2025-04-01 19:25:26.732958 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-04-01 19:25:26.733444 | orchestrator | Tuesday 01 April 2025 19:25:26 +0000 (0:00:06.925) 0:05:35.947 ********* 2025-04-01 19:25:29.487094 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:29.487282 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:29.487301 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:29.488062 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:29.489069 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:29.489376 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:29.491696 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:29.492791 | orchestrator | 2025-04-01 19:25:29.492810 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-04-01 19:25:29.493429 | orchestrator | Tuesday 01 April 2025 19:25:29 +0000 (0:00:02.760) 0:05:38.707 ********* 2025-04-01 19:25:30.915149 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:30.915445 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:30.915509 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:30.916176 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:30.916619 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:30.916764 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:30.917309 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:30.920488 | orchestrator | 2025-04-01 19:25:30.921021 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-04-01 19:25:30.921364 | orchestrator | Tuesday 01 April 2025 19:25:30 +0000 (0:00:01.427) 0:05:40.135 ********* 2025-04-01 19:25:32.271734 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:32.272229 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:32.272295 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:32.273731 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:32.274300 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:32.275256 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:32.276213 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:32.278255 | orchestrator | 2025-04-01 19:25:32.278774 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-04-01 19:25:32.279552 | orchestrator | Tuesday 01 April 2025 19:25:32 +0000 (0:00:01.354) 0:05:41.489 ********* 2025-04-01 19:25:32.513606 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:25:32.601465 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:25:32.682273 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:25:32.766453 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:25:32.991756 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:25:32.992421 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:25:32.992764 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:32.993389 | orchestrator | 2025-04-01 19:25:32.994139 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-04-01 19:25:32.994845 | orchestrator | Tuesday 01 April 2025 19:25:32 +0000 (0:00:00.722) 0:05:42.211 ********* 2025-04-01 19:25:43.490776 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:43.490953 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:43.492088 | 
orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:43.494854 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:43.496937 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:43.496964 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:43.496980 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:43.497000 | orchestrator | 2025-04-01 19:25:43.497885 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-04-01 19:25:43.499045 | orchestrator | Tuesday 01 April 2025 19:25:43 +0000 (0:00:10.497) 0:05:52.708 ********* 2025-04-01 19:25:44.514088 | orchestrator | changed: [testbed-manager] 2025-04-01 19:25:44.514549 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:44.515460 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:44.515680 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:44.516828 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:44.518179 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:44.518805 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:44.519319 | orchestrator | 2025-04-01 19:25:44.519953 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-04-01 19:25:44.520519 | orchestrator | Tuesday 01 April 2025 19:25:44 +0000 (0:00:01.024) 0:05:53.733 ********* 2025-04-01 19:25:57.922764 | orchestrator | ok: [testbed-manager] 2025-04-01 19:25:57.922985 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:25:57.923530 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:25:57.923559 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:25:57.923576 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:25:57.923591 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:25:57.923606 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:25:57.923620 | orchestrator | 2025-04-01 19:25:57.923642 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-01 19:25:57.924284 | orchestrator | Tuesday 01 April 2025 19:25:57 +0000 (0:00:13.401) 0:06:07.135 ********* 2025-04-01 19:26:10.195204 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:10.195684 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:10.195721 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:10.195743 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:10.196384 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:10.197080 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:10.197733 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:10.199005 | orchestrator | 2025-04-01 19:26:10.202119 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-01 19:26:10.202767 | orchestrator | Tuesday 01 April 2025 19:26:10 +0000 (0:00:12.276) 0:06:19.411 ********* 2025-04-01 19:26:10.652087 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-01 19:26:10.773301 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-04-01 19:26:11.769920 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-01 19:26:11.771308 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-01 19:26:11.772284 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-01 19:26:11.774519 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-01 19:26:11.775158 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker) 2025-04-01 19:26:11.775188 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-01 19:26:11.776197 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-01 19:26:11.777196 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-01 19:26:11.777997 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-01 19:26:11.778984 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-01 19:26:11.779772 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-01 19:26:11.780357 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-01 19:26:11.780834 | orchestrator | 2025-04-01 19:26:11.781532 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-01 19:26:11.782507 | orchestrator | Tuesday 01 April 2025 19:26:11 +0000 (0:00:01.574) 0:06:20.986 ********* 2025-04-01 19:26:11.947410 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:12.031719 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:12.106737 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:12.186516 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:12.266432 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:12.396837 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:12.397880 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:12.397929 | orchestrator | 2025-04-01 19:26:12.398006 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-01 19:26:12.398577 | orchestrator | Tuesday 01 April 2025 19:26:12 +0000 (0:00:00.631) 0:06:21.618 ********* 2025-04-01 19:26:16.459129 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:16.459318 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:16.459605 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:16.460061 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:16.460421 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:16.460560 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:16.460972 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:16.461265 | orchestrator | 2025-04-01 19:26:16.462933 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-01 19:26:16.463150 | orchestrator | Tuesday 01 April 2025 19:26:16 +0000 (0:00:04.062) 0:06:25.680 ********* 2025-04-01 19:26:16.616159 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:16.897792 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:16.971273 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:17.047461 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:17.126400 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:17.265921 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:17.266296 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:17.266962 | orchestrator | 2025-04-01 19:26:17.266998 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-01 19:26:17.267354 | orchestrator | Tuesday 01 April 2025 19:26:17 +0000 (0:00:00.806) 0:06:26.486 ********* 2025-04-01 19:26:17.341548 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-01 19:26:17.432939 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-01 19:26:17.433005 | 
orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:17.433063 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-01 19:26:17.433991 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-01 19:26:17.523148 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:17.524490 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-01 19:26:17.524521 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-04-01 19:26:17.605423 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:17.695870 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-01 19:26:17.695906 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-01 19:26:17.695929 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:17.696350 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-01 19:26:17.697730 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-01 19:26:17.779880 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:17.780503 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-01 19:26:17.781189 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-01 19:26:17.897403 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:17.898148 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-01 19:26:17.898179 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-01 19:26:17.898665 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:17.899059 | orchestrator | 2025-04-01 19:26:17.899386 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-01 19:26:17.900353 | orchestrator | Tuesday 01 April 2025 19:26:17 +0000 (0:00:00.631) 0:06:27.117 ********* 2025-04-01 19:26:18.059394 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:18.132180 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:18.226841 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:18.299470 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:18.380177 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:18.504419 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:18.504844 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:18.506661 | orchestrator | 2025-04-01 19:26:18.507174 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-01 19:26:18.507790 | orchestrator | Tuesday 01 April 2025 19:26:18 +0000 (0:00:00.607) 0:06:27.725 ********* 2025-04-01 19:26:18.656412 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:18.732542 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:18.799568 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:18.869361 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:18.961155 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:19.075777 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:19.077390 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:19.080105 | orchestrator | 2025-04-01 19:26:19.081763 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-01 19:26:19.082899 | orchestrator | Tuesday 01 April 2025 19:26:19 +0000 (0:00:00.568) 0:06:28.294 ********* 2025-04-01 19:26:19.229298 | orchestrator | 
skipping: [testbed-manager] 2025-04-01 19:26:19.302117 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:19.369425 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:19.446414 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:19.515194 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:19.643581 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:19.644691 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:19.645720 | orchestrator | 2025-04-01 19:26:19.649460 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-01 19:26:26.877000 | orchestrator | Tuesday 01 April 2025 19:26:19 +0000 (0:00:00.569) 0:06:28.863 ********* 2025-04-01 19:26:26.877117 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:26.878773 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:26.880956 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:26.881411 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:26.881440 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:26.881460 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:26.882069 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:26.882551 | orchestrator | 2025-04-01 19:26:26.883546 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-01 19:26:26.883670 | orchestrator | Tuesday 01 April 2025 19:26:26 +0000 (0:00:07.234) 0:06:36.097 ********* 2025-04-01 19:26:27.844032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:26:27.845089 | orchestrator | 2025-04-01 19:26:27.845936 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-01 19:26:27.847416 | orchestrator | Tuesday 01 April 2025 19:26:27 +0000 (0:00:00.966) 0:06:37.064 ********* 2025-04-01 19:26:28.337547 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:28.770942 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:28.771054 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:28.771077 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:28.771370 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:28.771985 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:28.772444 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:28.772901 | orchestrator | 2025-04-01 19:26:28.773582 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-01 19:26:28.773959 | orchestrator | Tuesday 01 April 2025 19:26:28 +0000 (0:00:00.928) 0:06:37.992 ********* 2025-04-01 19:26:29.559762 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:30.010925 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:30.011410 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:30.012123 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:30.012607 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:30.013637 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:30.014378 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:30.014881 | orchestrator | 2025-04-01 19:26:30.015636 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-01 19:26:30.015777 | orchestrator | Tuesday 01 April 
2025 19:26:30 +0000 (0:00:01.238) 0:06:39.230 ********* 2025-04-01 19:26:31.532697 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:31.533984 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:31.534074 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:31.534563 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:31.535102 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:31.538805 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:31.539057 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:31.539422 | orchestrator | 2025-04-01 19:26:31.539795 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-01 19:26:31.540070 | orchestrator | Tuesday 01 April 2025 19:26:31 +0000 (0:00:01.521) 0:06:40.751 ********* 2025-04-01 19:26:31.696522 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:33.236840 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:33.237008 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:33.238011 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:33.239413 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:33.239835 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:33.240490 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:33.240892 | orchestrator | 2025-04-01 19:26:33.241372 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-01 19:26:33.241751 | orchestrator | Tuesday 01 April 2025 19:26:33 +0000 (0:00:01.705) 0:06:42.457 ********* 2025-04-01 19:26:34.863028 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:34.864401 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:34.866564 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:34.867411 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:34.867445 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:34.868626 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:34.869850 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:34.871211 | orchestrator | 2025-04-01 19:26:34.872198 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-01 19:26:34.873272 | orchestrator | Tuesday 01 April 2025 19:26:34 +0000 (0:00:01.622) 0:06:44.080 ********* 2025-04-01 19:26:36.425017 | orchestrator | changed: [testbed-manager] 2025-04-01 19:26:36.425652 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:36.426125 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:36.427340 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:36.427790 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:36.429555 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:36.429968 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:36.430637 | orchestrator | 2025-04-01 19:26:36.432056 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-01 19:26:36.433506 | orchestrator | Tuesday 01 April 2025 19:26:36 +0000 (0:00:01.563) 0:06:45.644 ********* 2025-04-01 19:26:37.617553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:26:37.617687 | orchestrator | 2025-04-01 19:26:37.618222 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] 
*************************** 2025-04-01 19:26:37.618987 | orchestrator | Tuesday 01 April 2025 19:26:37 +0000 (0:00:01.192) 0:06:46.836 ********* 2025-04-01 19:26:39.277128 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:39.277350 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:39.277380 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:39.278293 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:39.278666 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:39.280989 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:39.281164 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:39.281194 | orchestrator | 2025-04-01 19:26:39.282112 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-01 19:26:40.759515 | orchestrator | Tuesday 01 April 2025 19:26:39 +0000 (0:00:01.661) 0:06:48.498 ********* 2025-04-01 19:26:40.759625 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:40.760927 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:40.764913 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:40.765270 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:40.765297 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:40.765316 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:40.765442 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:40.766062 | orchestrator | 2025-04-01 19:26:40.767852 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-01 19:26:42.167647 | orchestrator | Tuesday 01 April 2025 19:26:40 +0000 (0:00:01.471) 0:06:49.970 ********* 2025-04-01 19:26:42.167773 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:42.168520 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:42.170151 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:42.174116 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:42.174428 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:42.174456 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:42.174471 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:42.174485 | orchestrator | 2025-04-01 19:26:42.174507 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-01 19:26:42.176110 | orchestrator | Tuesday 01 April 2025 19:26:42 +0000 (0:00:01.415) 0:06:51.385 ********* 2025-04-01 19:26:43.847078 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:43.847625 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:43.848464 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:43.849099 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:43.850914 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:43.852387 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:43.852744 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:43.853563 | orchestrator | 2025-04-01 19:26:43.854501 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-01 19:26:43.854943 | orchestrator | Tuesday 01 April 2025 19:26:43 +0000 (0:00:01.680) 0:06:53.066 ********* 2025-04-01 19:26:45.447795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:26:45.448633 | orchestrator | 2025-04-01 19:26:45.449432 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 
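(The docker role's config and service tasks above copy a daemon.json and then manage the docker, docker.socket and containerd units before flushing handlers. A minimal hand-rolled sketch of that pattern in plain Ansible might look as follows; this is illustrative only, not the osism.services.docker implementation, and the inventory group docker_hosts, the docker_daemon_config variable and the file mode are assumptions, not values taken from the log.)

- name: Illustrative docker configuration and service handling (sketch, not the osism.services.docker role)
  hosts: docker_hosts            # assumed inventory group name
  become: true
  tasks:
    - name: Copy daemon.json configuration file
      ansible.builtin.copy:
        content: "{{ docker_daemon_config | to_nice_json }}"   # assumed variable holding the daemon settings
        dest: /etc/docker/daemon.json
        mode: "0644"
      notify: Restart docker service

    - name: Manage service
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: true

    - name: Manage docker socket service
      ansible.builtin.systemd:
        name: docker.socket
        enabled: true

    - name: Manage containerd service
      ansible.builtin.systemd:
        name: containerd
        state: started
        enabled: true

  handlers:
    - name: Restart docker service
      ansible.builtin.systemd:
        name: docker
        state: restarted

(Keeping the restart in a handler mirrors the pattern visible in the log, where "Restart docker service" runs as a handler only after the configuration tasks and the explicit handler flushes.)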
2025-04-01 19:26:45.449702 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:01.198) 0:06:54.265 ********* 2025-04-01 19:26:45.450141 | orchestrator | 2025-04-01 19:26:45.450526 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.451672 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.070) 0:06:54.335 ********* 2025-04-01 19:26:45.452188 | orchestrator | 2025-04-01 19:26:45.452216 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.452237 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.081) 0:06:54.416 ********* 2025-04-01 19:26:45.452663 | orchestrator | 2025-04-01 19:26:45.452923 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.453629 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.058) 0:06:54.475 ********* 2025-04-01 19:26:45.454074 | orchestrator | 2025-04-01 19:26:45.454544 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.455020 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.039) 0:06:54.514 ********* 2025-04-01 19:26:45.455700 | orchestrator | 2025-04-01 19:26:45.456001 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.456459 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.048) 0:06:54.563 ********* 2025-04-01 19:26:45.456686 | orchestrator | 2025-04-01 19:26:45.457845 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-01 19:26:45.458102 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.042) 0:06:54.605 ********* 2025-04-01 19:26:45.458913 | orchestrator | 2025-04-01 19:26:45.459536 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-01 19:26:45.459900 | orchestrator | Tuesday 01 April 2025 19:26:45 +0000 (0:00:00.061) 0:06:54.667 ********* 2025-04-01 19:26:46.946131 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:46.947216 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:46.947915 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:46.948420 | orchestrator | 2025-04-01 19:26:46.948924 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-01 19:26:46.950799 | orchestrator | Tuesday 01 April 2025 19:26:46 +0000 (0:00:01.497) 0:06:56.164 ********* 2025-04-01 19:26:48.838153 | orchestrator | changed: [testbed-manager] 2025-04-01 19:26:48.838368 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:48.839517 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:48.840192 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:48.842267 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:48.843990 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:48.844380 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:48.844770 | orchestrator | 2025-04-01 19:26:48.845782 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-01 19:26:48.846718 | orchestrator | Tuesday 01 April 2025 19:26:48 +0000 (0:00:01.892) 0:06:58.057 ********* 2025-04-01 19:26:50.169981 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:50.171392 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:50.173551 | 
orchestrator | changed: [testbed-manager] 2025-04-01 19:26:50.173579 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:50.173727 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:50.173847 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:50.174595 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:50.174890 | orchestrator | 2025-04-01 19:26:50.175346 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-01 19:26:50.175598 | orchestrator | Tuesday 01 April 2025 19:26:50 +0000 (0:00:01.330) 0:06:59.387 ********* 2025-04-01 19:26:50.331433 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:52.207608 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:52.208392 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:52.209209 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:52.211070 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:52.212882 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:52.213737 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:52.214516 | orchestrator | 2025-04-01 19:26:52.214734 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-01 19:26:52.215378 | orchestrator | Tuesday 01 April 2025 19:26:52 +0000 (0:00:02.036) 0:07:01.423 ********* 2025-04-01 19:26:52.318334 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:52.318952 | orchestrator | 2025-04-01 19:26:52.320772 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-01 19:26:53.687968 | orchestrator | Tuesday 01 April 2025 19:26:52 +0000 (0:00:00.116) 0:07:01.540 ********* 2025-04-01 19:26:53.688067 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:53.689486 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:26:53.690520 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:26:53.692562 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:26:53.692990 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:26:53.694105 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:26:53.695070 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:26:53.696058 | orchestrator | 2025-04-01 19:26:53.696155 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-01 19:26:53.696868 | orchestrator | Tuesday 01 April 2025 19:26:53 +0000 (0:00:01.366) 0:07:02.906 ********* 2025-04-01 19:26:53.837795 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:53.911989 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:26:53.994481 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:26:54.301948 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:26:54.373001 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:26:54.506450 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:26:54.508716 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:26:54.512092 | orchestrator | 2025-04-01 19:26:54.513709 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-01 19:26:54.514791 | orchestrator | Tuesday 01 April 2025 19:26:54 +0000 (0:00:00.819) 0:07:03.726 ********* 2025-04-01 19:26:55.552493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-04-01 19:26:55.553548 | orchestrator | 2025-04-01 19:26:55.553624 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-04-01 19:26:55.554515 | orchestrator | Tuesday 01 April 2025 19:26:55 +0000 (0:00:01.046) 0:07:04.773 ********* 2025-04-01 19:26:56.046357 | orchestrator | ok: [testbed-manager] 2025-04-01 19:26:56.593764 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:26:56.593943 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:26:56.595549 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:26:56.596679 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:26:56.598238 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:26:56.599175 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:26:56.600116 | orchestrator | 2025-04-01 19:26:56.600930 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-04-01 19:26:56.601575 | orchestrator | Tuesday 01 April 2025 19:26:56 +0000 (0:00:01.039) 0:07:05.813 ********* 2025-04-01 19:26:59.696083 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-04-01 19:26:59.698480 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-04-01 19:26:59.698986 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-04-01 19:26:59.699020 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-04-01 19:26:59.700355 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-04-01 19:26:59.703175 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-04-01 19:26:59.704969 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-04-01 19:26:59.706700 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-04-01 19:26:59.707564 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-04-01 19:26:59.708431 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-04-01 19:26:59.709108 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-04-01 19:26:59.709826 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-04-01 19:26:59.710757 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-04-01 19:26:59.711541 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-04-01 19:26:59.711964 | orchestrator | 2025-04-01 19:26:59.712806 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-04-01 19:26:59.713400 | orchestrator | Tuesday 01 April 2025 19:26:59 +0000 (0:00:03.100) 0:07:08.913 ********* 2025-04-01 19:26:59.845964 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:26:59.936963 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:00.017885 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:00.104911 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:00.177106 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:00.302328 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:00.303104 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:00.303419 | orchestrator | 2025-04-01 19:27:00.303826 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-04-01 19:27:00.304203 | orchestrator | Tuesday 01 April 2025 19:27:00 +0000 (0:00:00.611) 0:07:09.525 ********* 2025-04-01 19:27:01.356060 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:27:01.357021 | orchestrator | 2025-04-01 19:27:01.357777 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-04-01 19:27:01.358214 | orchestrator | Tuesday 01 April 2025 19:27:01 +0000 (0:00:01.049) 0:07:10.575 ********* 2025-04-01 19:27:01.856907 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:02.548848 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:02.549403 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:02.549798 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:02.550754 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:02.551386 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:02.552067 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:02.553237 | orchestrator | 2025-04-01 19:27:02.554108 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-04-01 19:27:02.554144 | orchestrator | Tuesday 01 April 2025 19:27:02 +0000 (0:00:01.194) 0:07:11.769 ********* 2025-04-01 19:27:03.042539 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:03.488078 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:03.488232 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:03.489205 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:03.490072 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:03.492646 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:03.493302 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:03.494105 | orchestrator | 2025-04-01 19:27:03.494482 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-04-01 19:27:03.495090 | orchestrator | Tuesday 01 April 2025 19:27:03 +0000 (0:00:00.937) 0:07:12.707 ********* 2025-04-01 19:27:03.643003 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:03.714009 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:03.784562 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:03.868292 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:03.940187 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:04.046546 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:04.046929 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:04.047418 | orchestrator | 2025-04-01 19:27:04.048031 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-04-01 19:27:04.048358 | orchestrator | Tuesday 01 April 2025 19:27:04 +0000 (0:00:00.560) 0:07:13.267 ********* 2025-04-01 19:27:05.701605 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:05.701797 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:05.702833 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:05.702914 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:05.703520 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:05.703552 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:05.704173 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:05.705602 | orchestrator | 2025-04-01 19:27:05.706368 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-04-01 19:27:05.706405 | orchestrator | Tuesday 01 April 2025 19:27:05 +0000 (0:00:01.655) 0:07:14.923 ********* 2025-04-01 
19:27:05.825127 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:05.897395 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:05.974732 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:06.042929 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:06.106921 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:06.202537 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:06.203360 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:06.206804 | orchestrator | 2025-04-01 19:27:06.207181 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-04-01 19:27:06.207856 | orchestrator | Tuesday 01 April 2025 19:27:06 +0000 (0:00:00.499) 0:07:15.423 ********* 2025-04-01 19:27:08.540767 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:08.541313 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:08.541354 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:08.543496 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:08.544795 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:08.545180 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:08.545976 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:08.546921 | orchestrator | 2025-04-01 19:27:08.547430 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-04-01 19:27:08.548405 | orchestrator | Tuesday 01 April 2025 19:27:08 +0000 (0:00:02.336) 0:07:17.759 ********* 2025-04-01 19:27:09.750187 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:09.750382 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:09.750888 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:09.753440 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:09.755029 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:09.755494 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:09.755552 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:09.755610 | orchestrator | 2025-04-01 19:27:09.756164 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-04-01 19:27:09.756651 | orchestrator | Tuesday 01 April 2025 19:27:09 +0000 (0:00:01.213) 0:07:18.972 ********* 2025-04-01 19:27:11.684131 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:11.684611 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:11.684658 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:11.688105 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:11.688398 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:11.690727 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:11.690755 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:11.690770 | orchestrator | 2025-04-01 19:27:11.690792 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-04-01 19:27:11.692906 | orchestrator | Tuesday 01 April 2025 19:27:11 +0000 (0:00:01.929) 0:07:20.902 ********* 2025-04-01 19:27:13.697125 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:13.697718 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:13.698887 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:13.700344 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:13.700654 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:13.701639 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:13.702846 | orchestrator | changed: [testbed-node-2] 
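Note: the osism.services.docker and osism.commons.docker_compose tasks above add the deploy user to the docker group, install the docker-compose-plugin package and place/enable the osism.target and docker-compose systemd units. A minimal way to spot-check that state on a node afterwards, using standard Docker and systemd commands (illustrative sketch only, not part of the job output; run as the deploy user, and note that the group change only takes effect on a new login):

    groups "$USER"                       # should now include the docker group added above
    docker compose version               # compose v2 provided by the docker-compose-plugin package
    systemctl is-enabled osism.target    # target copied and enabled by the docker_compose role
    systemctl status docker --no-pager   # daemon restarted on the nodes by the "Restart docker service" handler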
2025-04-01 19:27:13.703990 | orchestrator | 2025-04-01 19:27:13.704954 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-01 19:27:13.705648 | orchestrator | Tuesday 01 April 2025 19:27:13 +0000 (0:00:02.013) 0:07:22.916 ********* 2025-04-01 19:27:14.345041 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:14.413050 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:14.918598 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:14.918806 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:14.919564 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:14.919891 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:14.924100 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:14.924453 | orchestrator | 2025-04-01 19:27:14.925076 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-01 19:27:14.925392 | orchestrator | Tuesday 01 April 2025 19:27:14 +0000 (0:00:01.221) 0:07:24.137 ********* 2025-04-01 19:27:15.069432 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:15.150310 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:15.222809 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:15.307523 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:15.383412 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:15.833507 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:15.834141 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:15.834625 | orchestrator | 2025-04-01 19:27:15.835285 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-04-01 19:27:15.836014 | orchestrator | Tuesday 01 April 2025 19:27:15 +0000 (0:00:00.915) 0:07:25.053 ********* 2025-04-01 19:27:16.023462 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:16.116802 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:16.201864 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:16.274425 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:16.348498 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:16.455725 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:16.455997 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:16.457397 | orchestrator | 2025-04-01 19:27:16.458425 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-04-01 19:27:16.458973 | orchestrator | Tuesday 01 April 2025 19:27:16 +0000 (0:00:00.623) 0:07:25.677 ********* 2025-04-01 19:27:16.597544 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:16.687903 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:16.759158 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:16.843896 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:16.918204 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:17.026625 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:17.027142 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:17.028246 | orchestrator | 2025-04-01 19:27:17.029961 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-04-01 19:27:17.031220 | orchestrator | Tuesday 01 April 2025 19:27:17 +0000 (0:00:00.567) 0:07:26.245 ********* 2025-04-01 19:27:17.430994 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:17.511807 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:17.599709 | orchestrator | ok: [testbed-node-4] 2025-04-01 
19:27:17.687360 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:17.769089 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:17.889931 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:17.890179 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:17.890583 | orchestrator | 2025-04-01 19:27:17.897554 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-04-01 19:27:18.046810 | orchestrator | Tuesday 01 April 2025 19:27:17 +0000 (0:00:00.862) 0:07:27.108 ********* 2025-04-01 19:27:18.046875 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:18.126180 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:18.205081 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:18.283729 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:18.361469 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:18.490401 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:18.490512 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:18.490535 | orchestrator | 2025-04-01 19:27:18.490906 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-04-01 19:27:18.490992 | orchestrator | Tuesday 01 April 2025 19:27:18 +0000 (0:00:00.604) 0:07:27.712 ********* 2025-04-01 19:27:23.217832 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:23.218085 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:23.218117 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:23.220746 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:23.221478 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:23.221508 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:23.221924 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:23.222451 | orchestrator | 2025-04-01 19:27:23.222915 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-04-01 19:27:23.223803 | orchestrator | Tuesday 01 April 2025 19:27:23 +0000 (0:00:04.723) 0:07:32.436 ********* 2025-04-01 19:27:23.382979 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:23.460464 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:23.534143 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:23.611239 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:23.676590 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:23.857176 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:23.857975 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:23.859042 | orchestrator | 2025-04-01 19:27:23.862427 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-04-01 19:27:25.077969 | orchestrator | Tuesday 01 April 2025 19:27:23 +0000 (0:00:00.638) 0:07:33.075 ********* 2025-04-01 19:27:25.078167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:27:25.079006 | orchestrator | 2025-04-01 19:27:25.080208 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-04-01 19:27:25.080307 | orchestrator | Tuesday 01 April 2025 19:27:25 +0000 (0:00:01.221) 0:07:34.297 ********* 2025-04-01 19:27:27.123031 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:27.124217 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:27.124305 | orchestrator | ok: 
[testbed-node-4] 2025-04-01 19:27:27.124660 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:27.125964 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:27.126278 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:27.128362 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:27.128445 | orchestrator | 2025-04-01 19:27:27.128465 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-04-01 19:27:27.128485 | orchestrator | Tuesday 01 April 2025 19:27:27 +0000 (0:00:02.044) 0:07:36.341 ********* 2025-04-01 19:27:28.324749 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:28.325452 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:28.326857 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:28.328058 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:28.328912 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:28.330086 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:28.330781 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:28.332078 | orchestrator | 2025-04-01 19:27:28.333474 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-04-01 19:27:28.334353 | orchestrator | Tuesday 01 April 2025 19:27:28 +0000 (0:00:01.203) 0:07:37.544 ********* 2025-04-01 19:27:28.862170 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:29.267069 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:29.267757 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:29.268408 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:29.269479 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:29.270225 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:29.271185 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:29.272376 | orchestrator | 2025-04-01 19:27:29.273054 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-04-01 19:27:29.274314 | orchestrator | Tuesday 01 April 2025 19:27:29 +0000 (0:00:00.941) 0:07:38.485 ********* 2025-04-01 19:27:31.267432 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.268404 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.272662 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.273006 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.273034 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.273051 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.273066 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-01 19:27:31.273082 | orchestrator | 2025-04-01 19:27:31.273104 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-04-01 19:27:31.274005 | orchestrator | 
Tuesday 01 April 2025 19:27:31 +0000 (0:00:02.000) 0:07:40.486 ********* 2025-04-01 19:27:32.199639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:27:32.199895 | orchestrator | 2025-04-01 19:27:32.200691 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-04-01 19:27:32.201540 | orchestrator | Tuesday 01 April 2025 19:27:32 +0000 (0:00:00.933) 0:07:41.419 ********* 2025-04-01 19:27:41.966135 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:41.967069 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:41.967106 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:41.967122 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:41.967137 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:41.967159 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:41.968126 | orchestrator | changed: [testbed-manager] 2025-04-01 19:27:41.968158 | orchestrator | 2025-04-01 19:27:41.970642 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-04-01 19:27:41.972300 | orchestrator | Tuesday 01 April 2025 19:27:41 +0000 (0:00:09.760) 0:07:51.180 ********* 2025-04-01 19:27:43.986691 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:43.990492 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:43.990529 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:43.990588 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:43.990606 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:43.990621 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:43.990636 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:43.990654 | orchestrator | 2025-04-01 19:27:43.991599 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-04-01 19:27:43.992405 | orchestrator | Tuesday 01 April 2025 19:27:43 +0000 (0:00:02.023) 0:07:53.204 ********* 2025-04-01 19:27:45.567357 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:45.567568 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:45.568375 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:45.568886 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:45.570163 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:45.570984 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:45.571935 | orchestrator | 2025-04-01 19:27:45.574972 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-04-01 19:27:47.117656 | orchestrator | Tuesday 01 April 2025 19:27:45 +0000 (0:00:01.580) 0:07:54.785 ********* 2025-04-01 19:27:47.117774 | orchestrator | changed: [testbed-manager] 2025-04-01 19:27:47.118149 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:47.120611 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:47.121492 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:47.123747 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:47.126153 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:47.127169 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:47.127335 | orchestrator | 2025-04-01 19:27:47.128112 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-04-01 19:27:47.128570 | orchestrator | 2025-04-01 
19:27:47.129201 | orchestrator | TASK [Include hardening role] ************************************************** 2025-04-01 19:27:47.131319 | orchestrator | Tuesday 01 April 2025 19:27:47 +0000 (0:00:01.553) 0:07:56.338 ********* 2025-04-01 19:27:47.265329 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:47.336300 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:47.439705 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:47.526090 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:47.596310 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:47.747091 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:47.747243 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:47.748485 | orchestrator | 2025-04-01 19:27:47.748661 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-04-01 19:27:47.749009 | orchestrator | 2025-04-01 19:27:47.752462 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-04-01 19:27:49.058609 | orchestrator | Tuesday 01 April 2025 19:27:47 +0000 (0:00:00.630) 0:07:56.969 ********* 2025-04-01 19:27:49.058775 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:49.058856 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:49.059348 | orchestrator | changed: [testbed-manager] 2025-04-01 19:27:49.059680 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:49.060477 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:49.064206 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:49.064680 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:49.064997 | orchestrator | 2025-04-01 19:27:49.065729 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-04-01 19:27:49.066181 | orchestrator | Tuesday 01 April 2025 19:27:49 +0000 (0:00:01.308) 0:07:58.277 ********* 2025-04-01 19:27:50.551636 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:50.551969 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:50.553225 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:50.554521 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:50.555011 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:50.555710 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:50.556181 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:50.556793 | orchestrator | 2025-04-01 19:27:50.556823 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-04-01 19:27:50.557106 | orchestrator | Tuesday 01 April 2025 19:27:50 +0000 (0:00:01.492) 0:07:59.770 ********* 2025-04-01 19:27:50.705421 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:27:51.000747 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:27:51.074588 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:27:51.160819 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:27:51.246691 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:27:51.682412 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:27:51.683870 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:27:51.690098 | orchestrator | 2025-04-01 19:27:53.169085 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-04-01 19:27:53.169215 | orchestrator | Tuesday 01 April 2025 19:27:51 +0000 (0:00:01.131) 0:08:00.901 ********* 2025-04-01 19:27:53.169253 | orchestrator | changed: 
[testbed-manager] 2025-04-01 19:27:53.169957 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:53.170908 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:53.172585 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:53.173652 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:53.173930 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:53.174956 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:53.175517 | orchestrator | 2025-04-01 19:27:53.176235 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-04-01 19:27:53.177101 | orchestrator | 2025-04-01 19:27:53.177164 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-04-01 19:27:53.177769 | orchestrator | Tuesday 01 April 2025 19:27:53 +0000 (0:00:01.486) 0:08:02.388 ********* 2025-04-01 19:27:54.281704 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:27:54.282518 | orchestrator | 2025-04-01 19:27:54.282966 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-01 19:27:54.285883 | orchestrator | Tuesday 01 April 2025 19:27:54 +0000 (0:00:01.112) 0:08:03.500 ********* 2025-04-01 19:27:54.761845 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:55.312625 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:55.313738 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:27:55.315015 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:27:55.315698 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:27:55.316742 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:27:55.317456 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:27:55.318479 | orchestrator | 2025-04-01 19:27:55.319911 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-04-01 19:27:55.321374 | orchestrator | Tuesday 01 April 2025 19:27:55 +0000 (0:00:01.032) 0:08:04.533 ********* 2025-04-01 19:27:56.677858 | orchestrator | changed: [testbed-manager] 2025-04-01 19:27:56.679240 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:27:56.679552 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:27:56.680822 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:27:56.681752 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:27:56.682759 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:27:56.683949 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:27:56.685057 | orchestrator | 2025-04-01 19:27:56.686304 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-04-01 19:27:56.686548 | orchestrator | Tuesday 01 April 2025 19:27:56 +0000 (0:00:01.363) 0:08:05.896 ********* 2025-04-01 19:27:57.950611 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:27:57.956614 | orchestrator | 2025-04-01 19:27:57.959909 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-01 19:27:58.431179 | orchestrator | Tuesday 01 April 2025 19:27:57 +0000 (0:00:01.270) 0:08:07.167 ********* 2025-04-01 19:27:58.431372 | orchestrator | ok: [testbed-manager] 2025-04-01 19:27:58.810369 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:27:58.811651 | orchestrator | ok: 
[testbed-node-4]
2025-04-01 19:27:58.813332 | orchestrator | ok: [testbed-node-5]
2025-04-01 19:27:58.814943 | orchestrator | ok: [testbed-node-0]
2025-04-01 19:27:58.816514 | orchestrator | ok: [testbed-node-1]
2025-04-01 19:27:58.817690 | orchestrator | ok: [testbed-node-2]
2025-04-01 19:27:58.819002 | orchestrator |
2025-04-01 19:27:58.821526 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-04-01 19:27:58.822247 | orchestrator | Tuesday 01 April 2025 19:27:58 +0000 (0:00:00.865) 0:08:08.032 *********
2025-04-01 19:27:59.305219 | orchestrator | changed: [testbed-manager]
2025-04-01 19:27:59.994655 | orchestrator | changed: [testbed-node-3]
2025-04-01 19:27:59.995618 | orchestrator | changed: [testbed-node-4]
2025-04-01 19:27:59.996454 | orchestrator | changed: [testbed-node-5]
2025-04-01 19:28:00.000557 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:28:00.000882 | orchestrator | changed: [testbed-node-1]
2025-04-01 19:28:00.002007 | orchestrator | changed: [testbed-node-2]
2025-04-01 19:28:00.002799 | orchestrator |
2025-04-01 19:28:00.004035 | orchestrator | PLAY RECAP *********************************************************************
2025-04-01 19:28:00.004082 | orchestrator | 2025-04-01 19:28:00 | INFO | Play has been completed. There may now be a delay until all logs have been written.
2025-04-01 19:28:00.005201 | orchestrator | 2025-04-01 19:28:00 | INFO | Please wait and do not abort execution.
2025-04-01 19:28:00.005233 | orchestrator | testbed-manager : ok=160 changed=38 unreachable=0 failed=0 skipped=41 rescued=0 ignored=0
2025-04-01 19:28:00.005618 | orchestrator | testbed-node-0 : ok=168 changed=65 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2025-04-01 19:28:00.006725 | orchestrator | testbed-node-1 : ok=168 changed=65 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2025-04-01 19:28:00.008021 | orchestrator | testbed-node-2 : ok=168 changed=65 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2025-04-01 19:28:00.008053 | orchestrator | testbed-node-3 : ok=167 changed=62 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0
2025-04-01 19:28:00.008438 | orchestrator | testbed-node-4 : ok=167 changed=62 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2025-04-01 19:28:00.009867 | orchestrator | testbed-node-5 : ok=167 changed=62 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
2025-04-01 19:28:00.010793 | orchestrator |
2025-04-01 19:28:00.010828 | orchestrator | Tuesday 01 April 2025 19:27:59 +0000 (0:00:01.182) 0:08:09.215 *********
2025-04-01 19:28:00.011449 | orchestrator | ===============================================================================
2025-04-01 19:28:00.012446 | orchestrator | osism.commons.packages : Install required packages --------------------- 63.82s
2025-04-01 19:28:00.013480 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.86s
2025-04-01 19:28:00.014964 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.34s
2025-04-01 19:28:00.015874 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.07s
2025-04-01 19:28:00.016779 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.60s
2025-04-01 19:28:00.017579 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.52s
2025-04-01 19:28:00.019303 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 13.40s
2025-04-01 19:28:00.019832 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.28s
2025-04-01 19:28:00.020575 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.50s
2025-04-01 19:28:00.021068 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.76s
2025-04-01 19:28:00.022395 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.33s
2025-04-01 19:28:00.022629 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.44s
2025-04-01 19:28:00.023494 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.25s
2025-04-01 19:28:00.024903 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.21s
2025-04-01 19:28:00.025281 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 7.23s
2025-04-01 19:28:00.025927 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.93s
2025-04-01 19:28:00.026117 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.68s
2025-04-01 19:28:00.026812 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.90s
2025-04-01 19:28:00.027434 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.62s
2025-04-01 19:28:00.028166 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.43s
2025-04-01 19:28:00.613812 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-04-01 19:28:02.989819 | orchestrator | + osism apply network
2025-04-01 19:28:02.989928 | orchestrator | 2025-04-01 19:28:02 | INFO | Task 9ac639bd-7ad0-4f64-bd39-eb0f6fce8ea6 (network) was prepared for execution.
2025-04-01 19:28:06.937066 | orchestrator | 2025-04-01 19:28:02 | INFO | It takes a moment until task 9ac639bd-7ad0-4f64-bd39-eb0f6fce8ea6 (network) has been started and output is visible here.
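Note: osism apply network runs the osism.commons.network role shown below; on this Debian-family testbed it regenerates the netplan configuration under /etc/netplan and installs networkd-dispatcher together with the dispatcher scripts from /opt/configuration/network. Once the play has finished, the result can be inspected on any node with standard netplan and systemd-networkd tooling (illustrative commands, not taken from the job output; netplan get assumes a reasonably recent netplan.io):

    sudo netplan get                                  # merged view of the files under /etc/netplan (e.g. 01-osism.yaml)
    networkctl list                                   # interface state as seen by systemd-networkd
    systemctl status networkd-dispatcher --no-pager   # service managed by the role
    ls /etc/networkd-dispatcher/routable.d/           # typical location of the copied dispatcher scripts (iptables.sh, vxlan.sh)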
2025-04-01 19:28:06.937154 | orchestrator | 2025-04-01 19:28:06.938613 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-04-01 19:28:06.943569 | orchestrator | 2025-04-01 19:28:06.943981 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-04-01 19:28:06.944685 | orchestrator | Tuesday 01 April 2025 19:28:06 +0000 (0:00:00.245) 0:00:00.245 ********* 2025-04-01 19:28:07.093637 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:07.189686 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:07.276121 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:07.356886 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:07.460305 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:07.748819 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:07.749500 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:07.750975 | orchestrator | 2025-04-01 19:28:07.751800 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-04-01 19:28:07.752720 | orchestrator | Tuesday 01 April 2025 19:28:07 +0000 (0:00:00.817) 0:00:01.062 ********* 2025-04-01 19:28:09.102222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:28:09.102634 | orchestrator | 2025-04-01 19:28:09.102757 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-04-01 19:28:09.103129 | orchestrator | Tuesday 01 April 2025 19:28:09 +0000 (0:00:01.350) 0:00:02.413 ********* 2025-04-01 19:28:11.505481 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:11.505646 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:11.506304 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:11.506978 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:11.507302 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:11.507745 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:11.508251 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:11.509064 | orchestrator | 2025-04-01 19:28:11.509338 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-04-01 19:28:11.509754 | orchestrator | Tuesday 01 April 2025 19:28:11 +0000 (0:00:02.403) 0:00:04.817 ********* 2025-04-01 19:28:13.294154 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:13.294353 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:13.294922 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:13.296153 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:13.297002 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:13.297741 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:13.298786 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:13.299544 | orchestrator | 2025-04-01 19:28:13.300762 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-04-01 19:28:13.302574 | orchestrator | Tuesday 01 April 2025 19:28:13 +0000 (0:00:01.785) 0:00:06.603 ********* 2025-04-01 19:28:14.455957 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-01 19:28:14.460947 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-01 19:28:14.462595 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-01 19:28:14.463404 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-01 19:28:14.463821 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-01 19:28:14.464626 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-01 19:28:14.465177 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-01 19:28:14.465858 | orchestrator | 2025-04-01 19:28:14.466488 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-01 19:28:14.469383 | orchestrator | Tuesday 01 April 2025 19:28:14 +0000 (0:00:01.165) 0:00:07.768 ********* 2025-04-01 19:28:16.385913 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:28:16.386657 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-01 19:28:16.386697 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-01 19:28:16.387600 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-01 19:28:16.387909 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:28:16.388698 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-01 19:28:16.389028 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-01 19:28:16.389762 | orchestrator | 2025-04-01 19:28:16.390305 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-01 19:28:16.390752 | orchestrator | Tuesday 01 April 2025 19:28:16 +0000 (0:00:01.933) 0:00:09.701 ********* 2025-04-01 19:28:18.125118 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:18.125479 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:28:18.128988 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:28:18.129371 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:28:18.129399 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:28:18.129415 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:28:18.129430 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:28:18.129450 | orchestrator | 2025-04-01 19:28:18.129936 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-01 19:28:18.130160 | orchestrator | Tuesday 01 April 2025 19:28:18 +0000 (0:00:01.734) 0:00:11.436 ********* 2025-04-01 19:28:18.781950 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:28:19.296934 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:28:19.297549 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-01 19:28:19.299253 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-01 19:28:19.300823 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-01 19:28:19.301910 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-01 19:28:19.303157 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-01 19:28:19.303476 | orchestrator | 2025-04-01 19:28:19.304485 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-01 19:28:19.305571 | orchestrator | Tuesday 01 April 2025 19:28:19 +0000 (0:00:01.176) 0:00:12.612 ********* 2025-04-01 19:28:19.806307 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:19.901526 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:20.573665 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:20.574307 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:20.575969 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:20.576652 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:20.576683 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:20.577425 | orchestrator | 2025-04-01 
19:28:20.577605 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-04-01 19:28:20.578338 | orchestrator | Tuesday 01 April 2025 19:28:20 +0000 (0:00:01.272) 0:00:13.885 ********* 2025-04-01 19:28:20.754827 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:28:20.863325 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:28:20.955642 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:28:21.048657 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:28:21.147111 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:28:21.520625 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:28:21.521185 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:28:21.521612 | orchestrator | 2025-04-01 19:28:21.522394 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-01 19:28:21.522815 | orchestrator | Tuesday 01 April 2025 19:28:21 +0000 (0:00:00.948) 0:00:14.833 ********* 2025-04-01 19:28:23.604961 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:23.605797 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:23.606687 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:23.607363 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:23.608215 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:23.610605 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:23.610733 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:23.611628 | orchestrator | 2025-04-01 19:28:23.612797 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-01 19:28:23.613629 | orchestrator | Tuesday 01 April 2025 19:28:23 +0000 (0:00:02.086) 0:00:16.920 ********* 2025-04-01 19:28:24.470175 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-01 19:28:25.727707 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.730109 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.732439 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.733325 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.734167 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.735081 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.736527 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-01 19:28:25.737536 | orchestrator | 2025-04-01 19:28:25.737612 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-01 19:28:25.738559 | orchestrator | Tuesday 01 April 2025 19:28:25 +0000 (0:00:02.116) 0:00:19.037 ********* 2025-04-01 19:28:27.234737 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:27.234912 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:28:27.235560 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:28:27.236252 | 
orchestrator | changed: [testbed-node-3] 2025-04-01 19:28:27.240023 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:28:28.878788 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:28:28.878897 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:28:28.878915 | orchestrator | 2025-04-01 19:28:28.878932 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-01 19:28:28.878948 | orchestrator | Tuesday 01 April 2025 19:28:27 +0000 (0:00:01.510) 0:00:20.548 ********* 2025-04-01 19:28:28.878995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:28:28.880250 | orchestrator | 2025-04-01 19:28:28.883312 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-01 19:28:28.884606 | orchestrator | Tuesday 01 April 2025 19:28:28 +0000 (0:00:01.642) 0:00:22.190 ********* 2025-04-01 19:28:29.496851 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:29.946607 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:29.947374 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:29.947952 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:29.948966 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:29.949127 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:29.949156 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:29.949572 | orchestrator | 2025-04-01 19:28:29.953015 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-01 19:28:30.142213 | orchestrator | Tuesday 01 April 2025 19:28:29 +0000 (0:00:01.070) 0:00:23.260 ********* 2025-04-01 19:28:30.142345 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:30.229682 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:28:30.532369 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:28:30.629500 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:28:30.720873 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:28:30.881210 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:28:30.881863 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:28:30.881898 | orchestrator | 2025-04-01 19:28:30.882714 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-01 19:28:30.883124 | orchestrator | Tuesday 01 April 2025 19:28:30 +0000 (0:00:00.933) 0:00:24.193 ********* 2025-04-01 19:28:31.396236 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.396460 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.396524 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.397442 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.489845 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.491469 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.944855 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.945444 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.945904 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.946249 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.947283 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.949025 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.949143 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-01 19:28:31.949673 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-01 19:28:31.950183 | orchestrator | 2025-04-01 19:28:31.950634 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-01 19:28:31.951069 | orchestrator | Tuesday 01 April 2025 19:28:31 +0000 (0:00:01.067) 0:00:25.261 ********* 2025-04-01 19:28:32.340742 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:28:32.435670 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:28:32.529542 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:28:32.625424 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:28:32.719347 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:28:33.970102 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:28:33.972865 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:28:33.974504 | orchestrator | 2025-04-01 19:28:33.974541 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-01 19:28:33.975775 | orchestrator | Tuesday 01 April 2025 19:28:33 +0000 (0:00:02.020) 0:00:27.281 ********* 2025-04-01 19:28:34.142924 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:28:34.238611 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:28:34.546723 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:28:34.634179 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:28:34.726582 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:28:34.769170 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:28:34.770205 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:28:34.770841 | orchestrator | 2025-04-01 19:28:34.771553 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:28:34.771659 | orchestrator | 2025-04-01 19:28:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:28:34.771902 | orchestrator | 2025-04-01 19:28:34 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:28:34.772961 | orchestrator | testbed-manager : ok=16 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.773119 | orchestrator | testbed-node-0 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.773428 | orchestrator | testbed-node-1 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.774112 | orchestrator | testbed-node-2 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.774347 | orchestrator | testbed-node-3 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.774779 | orchestrator | testbed-node-4 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.775420 | orchestrator | testbed-node-5 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2025-04-01 19:28:34.775850 | orchestrator |
2025-04-01 19:28:34.776412 | orchestrator | Tuesday 01 April 2025 19:28:34 +0000 (0:00:00.804) 0:00:28.086 *********
2025-04-01 19:28:34.776854 | orchestrator | ===============================================================================
2025-04-01 19:28:34.777303 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.40s
2025-04-01 19:28:34.777741 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.12s
2025-04-01 19:28:34.778828 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s
2025-04-01 19:28:34.779537 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 2.02s
2025-04-01 19:28:34.779568 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.93s
2025-04-01 19:28:34.779848 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s
2025-04-01 19:28:34.780485 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.73s
2025-04-01 19:28:34.780805 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.64s
2025-04-01 19:28:34.781298 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.51s
2025-04-01 19:28:34.781764 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.35s
2025-04-01 19:28:34.782214 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.27s
2025-04-01 19:28:34.782679 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.18s
2025-04-01 19:28:34.783111 | orchestrator | osism.commons.network : Create required directories --------------------- 1.17s
2025-04-01 19:28:34.783629 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.07s
2025-04-01 19:28:34.783867 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.07s
2025-04-01 19:28:34.784558 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.95s
2025-04-01 19:28:34.784647 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.93s
2025-04-01 19:28:34.785012 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.82s
2025-04-01 19:28:34.785432 | orchestrator | osism.commons.network : Netplan configuration changed
------------------- 0.80s 2025-04-01 19:28:35.465113 | orchestrator | + osism apply wireguard 2025-04-01 19:28:37.150487 | orchestrator | 2025-04-01 19:28:37 | INFO  | Task c5d1e0d2-5011-4d48-9eca-47e827ae6ba5 (wireguard) was prepared for execution. 2025-04-01 19:28:41.501639 | orchestrator | 2025-04-01 19:28:37 | INFO  | It takes a moment until task c5d1e0d2-5011-4d48-9eca-47e827ae6ba5 (wireguard) has been started and output is visible here. 2025-04-01 19:28:41.501789 | orchestrator | 2025-04-01 19:28:41.501868 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-04-01 19:28:41.502247 | orchestrator | 2025-04-01 19:28:41.504313 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-04-01 19:28:41.504915 | orchestrator | Tuesday 01 April 2025 19:28:41 +0000 (0:00:00.206) 0:00:00.206 ********* 2025-04-01 19:28:43.307665 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:43.308145 | orchestrator | 2025-04-01 19:28:43.308187 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-04-01 19:28:43.308577 | orchestrator | Tuesday 01 April 2025 19:28:43 +0000 (0:00:01.805) 0:00:02.012 ********* 2025-04-01 19:28:50.637701 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:50.638673 | orchestrator | 2025-04-01 19:28:50.638716 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-04-01 19:28:50.638742 | orchestrator | Tuesday 01 April 2025 19:28:50 +0000 (0:00:07.329) 0:00:09.342 ********* 2025-04-01 19:28:51.205395 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:51.206394 | orchestrator | 2025-04-01 19:28:51.208079 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-04-01 19:28:51.209251 | orchestrator | Tuesday 01 April 2025 19:28:51 +0000 (0:00:00.569) 0:00:09.911 ********* 2025-04-01 19:28:51.681486 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:51.681967 | orchestrator | 2025-04-01 19:28:51.684495 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-04-01 19:28:51.685234 | orchestrator | Tuesday 01 April 2025 19:28:51 +0000 (0:00:00.477) 0:00:10.388 ********* 2025-04-01 19:28:52.204335 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:52.205133 | orchestrator | 2025-04-01 19:28:52.206301 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-04-01 19:28:52.207003 | orchestrator | Tuesday 01 April 2025 19:28:52 +0000 (0:00:00.521) 0:00:10.910 ********* 2025-04-01 19:28:52.846242 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:52.847132 | orchestrator | 2025-04-01 19:28:52.847993 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-04-01 19:28:52.849417 | orchestrator | Tuesday 01 April 2025 19:28:52 +0000 (0:00:00.642) 0:00:11.553 ********* 2025-04-01 19:28:53.350848 | orchestrator | ok: [testbed-manager] 2025-04-01 19:28:53.350989 | orchestrator | 2025-04-01 19:28:53.352392 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-04-01 19:28:53.353224 | orchestrator | Tuesday 01 April 2025 19:28:53 +0000 (0:00:00.504) 0:00:12.058 ********* 2025-04-01 19:28:54.722715 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:54.725672 | orchestrator | 2025-04-01 19:28:54.726069 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-04-01 19:28:54.727779 | orchestrator | Tuesday 01 April 2025 19:28:54 +0000 (0:00:01.370) 0:00:13.429 ********* 2025-04-01 19:28:55.770131 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-01 19:28:55.770637 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:55.772684 | orchestrator | 2025-04-01 19:28:57.807620 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-04-01 19:28:57.807729 | orchestrator | Tuesday 01 April 2025 19:28:55 +0000 (0:00:01.047) 0:00:14.477 ********* 2025-04-01 19:28:57.807763 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:57.808962 | orchestrator | 2025-04-01 19:28:57.809507 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-04-01 19:28:57.811798 | orchestrator | Tuesday 01 April 2025 19:28:57 +0000 (0:00:02.036) 0:00:16.513 ********* 2025-04-01 19:28:58.765465 | orchestrator | changed: [testbed-manager] 2025-04-01 19:28:58.765611 | orchestrator | 2025-04-01 19:28:58.765890 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:28:58.766421 | orchestrator | 2025-04-01 19:28:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:28:58.766705 | orchestrator | 2025-04-01 19:28:58 | INFO  | Please wait and do not abort execution. 2025-04-01 19:28:58.766994 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:28:58.767448 | orchestrator | 2025-04-01 19:28:58.767844 | orchestrator | Tuesday 01 April 2025 19:28:58 +0000 (0:00:00.960) 0:00:17.474 ********* 2025-04-01 19:28:58.768374 | orchestrator | =============================================================================== 2025-04-01 19:28:58.768626 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.33s 2025-04-01 19:28:58.769519 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.04s 2025-04-01 19:28:58.769982 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.81s 2025-04-01 19:28:58.770233 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.37s 2025-04-01 19:28:58.770675 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.05s 2025-04-01 19:28:58.770900 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-04-01 19:28:58.771367 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.64s 2025-04-01 19:28:58.771668 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-04-01 19:28:58.772338 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-04-01 19:28:58.772439 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.50s 2025-04-01 19:28:58.772463 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s 2025-04-01 19:28:59.398771 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-04-01 19:28:59.438106 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-04-01 19:28:59.516894 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-04-01 19:28:59.516987 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 177 0 --:--:-- --:--:-- --:--:-- 179 2025-04-01 19:28:59.529599 | orchestrator | + osism apply --environment custom workarounds 2025-04-01 19:29:01.128250 | orchestrator | 2025-04-01 19:29:01 | INFO  | Trying to run play workarounds in environment custom 2025-04-01 19:29:01.183099 | orchestrator | 2025-04-01 19:29:01 | INFO  | Task 8b0e0da3-2905-40fd-9438-c34087c9650d (workarounds) was prepared for execution. 2025-04-01 19:29:04.630713 | orchestrator | 2025-04-01 19:29:01 | INFO  | It takes a moment until task 8b0e0da3-2905-40fd-9438-c34087c9650d (workarounds) has been started and output is visible here. 2025-04-01 19:29:04.630855 | orchestrator | 2025-04-01 19:29:04.632788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:29:04.632861 | orchestrator | 2025-04-01 19:29:04.634838 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-01 19:29:04.635738 | orchestrator | Tuesday 01 April 2025 19:29:04 +0000 (0:00:00.170) 0:00:00.170 ********* 2025-04-01 19:29:04.804074 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-01 19:29:04.907464 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-01 19:29:04.997414 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-01 19:29:05.093355 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-01 19:29:05.191049 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-01 19:29:05.501122 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-01 19:29:05.501359 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-01 19:29:05.502673 | orchestrator | 2025-04-01 19:29:05.503389 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-01 19:29:05.504445 | orchestrator | 2025-04-01 19:29:05.504781 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-01 19:29:05.505495 | orchestrator | Tuesday 01 April 2025 19:29:05 +0000 (0:00:00.868) 0:00:01.039 ********* 2025-04-01 19:29:08.487570 | orchestrator | ok: [testbed-manager] 2025-04-01 19:29:08.489570 | orchestrator | 2025-04-01 19:29:08.493383 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-01 19:29:08.493697 | orchestrator | 2025-04-01 19:29:08.498053 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-01 19:29:08.501589 | orchestrator | Tuesday 01 April 2025 19:29:08 +0000 (0:00:02.984) 0:00:04.023 ********* 2025-04-01 19:29:10.497423 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:29:10.498117 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:29:10.499622 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:29:10.500423 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:29:10.501385 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:29:10.503165 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:29:10.503846 | orchestrator | 2025-04-01 19:29:10.504335 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-01 19:29:10.505220 | orchestrator | 2025-04-01 
19:29:10.506148 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-01 19:29:10.507235 | orchestrator | Tuesday 01 April 2025 19:29:10 +0000 (0:00:02.013) 0:00:06.036 ********* 2025-04-01 19:29:12.023089 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.025083 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.025696 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.026417 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.027661 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.030098 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-01 19:29:12.030818 | orchestrator | 2025-04-01 19:29:12.030841 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-04-01 19:29:12.030854 | orchestrator | Tuesday 01 April 2025 19:29:12 +0000 (0:00:01.523) 0:00:07.560 ********* 2025-04-01 19:29:14.358797 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:29:14.359983 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:29:14.361578 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:29:14.363064 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:29:14.363941 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:29:14.364915 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:29:14.365590 | orchestrator | 2025-04-01 19:29:14.366402 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-01 19:29:14.367656 | orchestrator | Tuesday 01 April 2025 19:29:14 +0000 (0:00:02.340) 0:00:09.900 ********* 2025-04-01 19:29:14.526184 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:29:14.612573 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:29:14.695394 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:29:14.960137 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:29:15.108751 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:29:15.110131 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:29:15.110166 | orchestrator | 2025-04-01 19:29:15.111181 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-04-01 19:29:15.112734 | orchestrator | 2025-04-01 19:29:15.114222 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-04-01 19:29:15.115176 | orchestrator | Tuesday 01 April 2025 19:29:15 +0000 (0:00:00.738) 0:00:10.639 ********* 2025-04-01 19:29:17.077465 | orchestrator | changed: [testbed-manager] 2025-04-01 19:29:17.079336 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:29:17.079963 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:29:17.081353 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:29:17.082703 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:29:17.083456 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:29:17.085297 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:29:17.086185 | orchestrator | 2025-04-01 19:29:17.086873 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-04-01 19:29:17.087770 | orchestrator | Tuesday 01 April 2025 19:29:17 +0000 (0:00:01.979) 0:00:12.619 ********* 2025-04-01 19:29:19.016200 | orchestrator | changed: [testbed-manager] 2025-04-01 19:29:19.017176 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:29:19.018131 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:29:19.019069 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:29:19.020019 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:29:19.020687 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:29:19.021514 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:29:19.022313 | orchestrator | 2025-04-01 19:29:19.023318 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-04-01 19:29:19.023986 | orchestrator | Tuesday 01 April 2025 19:29:19 +0000 (0:00:01.935) 0:00:14.555 ********* 2025-04-01 19:29:20.796356 | orchestrator | ok: [testbed-manager] 2025-04-01 19:29:20.800797 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:29:20.801176 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:29:20.802138 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:29:20.803076 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:29:20.804119 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:29:20.808389 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:29:20.809231 | orchestrator | 2025-04-01 19:29:20.810334 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-04-01 19:29:20.811342 | orchestrator | Tuesday 01 April 2025 19:29:20 +0000 (0:00:01.782) 0:00:16.337 ********* 2025-04-01 19:29:22.661757 | orchestrator | changed: [testbed-manager] 2025-04-01 19:29:22.663691 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:29:22.664526 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:29:22.665207 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:29:22.666413 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:29:22.668910 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:29:22.669540 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:29:22.670092 | orchestrator | 2025-04-01 19:29:22.670428 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-01 19:29:22.671004 | orchestrator | Tuesday 01 April 2025 19:29:22 +0000 (0:00:01.866) 0:00:18.203 ********* 2025-04-01 19:29:22.846965 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:29:22.950312 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:29:23.034472 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:29:23.120678 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:29:23.391048 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:29:23.546786 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:29:23.547816 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:29:23.549163 | orchestrator | 2025-04-01 19:29:23.549964 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-01 19:29:23.550914 | orchestrator | 2025-04-01 19:29:23.551953 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-01 19:29:23.553428 | orchestrator | Tuesday 01 April 2025 19:29:23 +0000 (0:00:00.886) 0:00:19.090 ********* 2025-04-01 19:29:26.756948 | orchestrator | ok: [testbed-manager] 2025-04-01 19:29:26.757526 
| orchestrator | ok: [testbed-node-4] 2025-04-01 19:29:26.757556 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:29:26.757577 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:29:26.758423 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:29:26.758454 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:29:26.758966 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:29:26.761302 | orchestrator | 2025-04-01 19:29:26.762147 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:29:26.762375 | orchestrator | 2025-04-01 19:29:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:29:26.762665 | orchestrator | 2025-04-01 19:29:26 | INFO  | Please wait and do not abort execution. 2025-04-01 19:29:26.763603 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:29:26.764524 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.765004 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.765898 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.766295 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.766708 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.767367 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:26.767711 | orchestrator | 2025-04-01 19:29:26.768400 | orchestrator | Tuesday 01 April 2025 19:29:26 +0000 (0:00:03.207) 0:00:22.298 ********* 2025-04-01 19:29:26.768763 | orchestrator | =============================================================================== 2025-04-01 19:29:26.769042 | orchestrator | Install python3-docker -------------------------------------------------- 3.21s 2025-04-01 19:29:26.769578 | orchestrator | Apply netplan configuration --------------------------------------------- 2.98s 2025-04-01 19:29:26.769873 | orchestrator | Run update-ca-certificates ---------------------------------------------- 2.34s 2025-04-01 19:29:26.770421 | orchestrator | Apply netplan configuration --------------------------------------------- 2.01s 2025-04-01 19:29:26.770639 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.98s 2025-04-01 19:29:26.771133 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.94s 2025-04-01 19:29:26.771773 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.87s 2025-04-01 19:29:26.772120 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.78s 2025-04-01 19:29:26.772857 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-04-01 19:29:26.773358 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.89s 2025-04-01 19:29:26.773388 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.87s 2025-04-01 19:29:26.773475 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s 2025-04-01 19:29:27.452650 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-01 19:29:29.101343 | orchestrator | 2025-04-01 19:29:29 | INFO  | Task 43f4ae99-c92b-47c6-a161-a90915b1de37 (reboot) was prepared for execution. 2025-04-01 19:29:32.657042 | orchestrator | 2025-04-01 19:29:29 | INFO  | It takes a moment until task 43f4ae99-c92b-47c6-a161-a90915b1de37 (reboot) has been started and output is visible here. 2025-04-01 19:29:32.657222 | orchestrator | 2025-04-01 19:29:32.658624 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:32.660419 | orchestrator | 2025-04-01 19:29:32.660962 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:32.661857 | orchestrator | Tuesday 01 April 2025 19:29:32 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-04-01 19:29:32.761523 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:29:32.762312 | orchestrator | 2025-04-01 19:29:32.762820 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-01 19:29:32.763379 | orchestrator | Tuesday 01 April 2025 19:29:32 +0000 (0:00:00.107) 0:00:00.280 ********* 2025-04-01 19:29:33.735156 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:29:33.736232 | orchestrator | 2025-04-01 19:29:33.736753 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:33.738705 | orchestrator | Tuesday 01 April 2025 19:29:33 +0000 (0:00:00.973) 0:00:01.254 ********* 2025-04-01 19:29:33.873176 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:29:33.874108 | orchestrator | 2025-04-01 19:29:33.876049 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:33.876814 | orchestrator | 2025-04-01 19:29:33.878355 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:33.879352 | orchestrator | Tuesday 01 April 2025 19:29:33 +0000 (0:00:00.138) 0:00:01.392 ********* 2025-04-01 19:29:33.977013 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:29:33.977135 | orchestrator | 2025-04-01 19:29:33.977972 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-01 19:29:33.979991 | orchestrator | Tuesday 01 April 2025 19:29:33 +0000 (0:00:00.103) 0:00:01.496 ********* 2025-04-01 19:29:34.670394 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:29:34.671430 | orchestrator | 2025-04-01 19:29:34.672103 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:34.673879 | orchestrator | Tuesday 01 April 2025 19:29:34 +0000 (0:00:00.693) 0:00:02.190 ********* 2025-04-01 19:29:34.798550 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:29:34.800703 | orchestrator | 2025-04-01 19:29:34.801470 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:34.802178 | orchestrator | 2025-04-01 19:29:34.803340 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:34.804228 | orchestrator | Tuesday 01 April 2025 19:29:34 +0000 (0:00:00.126) 0:00:02.316 ********* 2025-04-01 19:29:34.909980 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:29:34.910370 | orchestrator | 2025-04-01 19:29:34.911608 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-04-01 19:29:34.912400 | orchestrator | Tuesday 01 April 2025 19:29:34 +0000 (0:00:00.110) 0:00:02.427 ********* 2025-04-01 19:29:35.748334 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:29:35.749533 | orchestrator | 2025-04-01 19:29:35.751198 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:35.753030 | orchestrator | Tuesday 01 April 2025 19:29:35 +0000 (0:00:00.840) 0:00:03.267 ********* 2025-04-01 19:29:35.910541 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:29:35.911151 | orchestrator | 2025-04-01 19:29:35.914763 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:35.914998 | orchestrator | 2025-04-01 19:29:35.915981 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:35.916251 | orchestrator | Tuesday 01 April 2025 19:29:35 +0000 (0:00:00.158) 0:00:03.425 ********* 2025-04-01 19:29:36.024635 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:29:36.024844 | orchestrator | 2025-04-01 19:29:36.024874 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-01 19:29:36.707984 | orchestrator | Tuesday 01 April 2025 19:29:36 +0000 (0:00:00.118) 0:00:03.544 ********* 2025-04-01 19:29:36.708170 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:29:36.708248 | orchestrator | 2025-04-01 19:29:36.708304 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:36.709135 | orchestrator | Tuesday 01 April 2025 19:29:36 +0000 (0:00:00.682) 0:00:04.227 ********* 2025-04-01 19:29:36.833166 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:29:36.833597 | orchestrator | 2025-04-01 19:29:36.836143 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:36.836244 | orchestrator | 2025-04-01 19:29:36.836268 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:36.837169 | orchestrator | Tuesday 01 April 2025 19:29:36 +0000 (0:00:00.121) 0:00:04.349 ********* 2025-04-01 19:29:36.959964 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:29:36.960161 | orchestrator | 2025-04-01 19:29:36.960988 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-01 19:29:37.683978 | orchestrator | Tuesday 01 April 2025 19:29:36 +0000 (0:00:00.130) 0:00:04.480 ********* 2025-04-01 19:29:37.684195 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:29:37.684342 | orchestrator | 2025-04-01 19:29:37.684702 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:37.685079 | orchestrator | Tuesday 01 April 2025 19:29:37 +0000 (0:00:00.723) 0:00:05.204 ********* 2025-04-01 19:29:37.824523 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:29:37.824649 | orchestrator | 2025-04-01 19:29:37.824963 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-01 19:29:37.827022 | orchestrator | 2025-04-01 19:29:37.827691 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-01 19:29:37.828339 | orchestrator | Tuesday 01 April 2025 19:29:37 +0000 (0:00:00.135) 0:00:05.339 ********* 2025-04-01 19:29:37.920263 | orchestrator | skipping: 
[testbed-node-5] 2025-04-01 19:29:37.921459 | orchestrator | 2025-04-01 19:29:37.922169 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-01 19:29:37.923104 | orchestrator | Tuesday 01 April 2025 19:29:37 +0000 (0:00:00.099) 0:00:05.439 ********* 2025-04-01 19:29:38.563970 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:29:38.564125 | orchestrator | 2025-04-01 19:29:38.564154 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-01 19:29:38.564510 | orchestrator | Tuesday 01 April 2025 19:29:38 +0000 (0:00:00.644) 0:00:06.083 ********* 2025-04-01 19:29:38.609864 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:29:38.610113 | orchestrator | 2025-04-01 19:29:38.611260 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:29:38.612003 | orchestrator | 2025-04-01 19:29:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:29:38.613211 | orchestrator | 2025-04-01 19:29:38 | INFO  | Please wait and do not abort execution. 2025-04-01 19:29:38.614321 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.615194 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.616204 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.617256 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.617690 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.618704 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:29:38.619558 | orchestrator | 2025-04-01 19:29:38.620728 | orchestrator | Tuesday 01 April 2025 19:29:38 +0000 (0:00:00.044) 0:00:06.127 ********* 2025-04-01 19:29:38.622179 | orchestrator | =============================================================================== 2025-04-01 19:29:38.623031 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.56s 2025-04-01 19:29:38.623543 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.73s 2025-04-01 19:29:38.624190 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.67s 2025-04-01 19:29:39.217329 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-01 19:29:40.867654 | orchestrator | 2025-04-01 19:29:40 | INFO  | Task 2d17adf6-b8d0-4e97-8be9-8e6a62a1e0b5 (wait-for-connection) was prepared for execution. 2025-04-01 19:29:44.358787 | orchestrator | 2025-04-01 19:29:40 | INFO  | It takes a moment until task 2d17adf6-b8d0-4e97-8be9-8e6a62a1e0b5 (wait-for-connection) has been started and output is visible here. 
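For reference, the reboot step traced above reduces to two osism invocations run back to back; shown here as a minimal shell sketch that reuses only the commands and flags already visible in this log (the ireallymeanit=yes extra variable is what allows the "Exit playbook, if user did not mean to reboot systems" guard task to be skipped):

# Sketch of the reboot-and-wait sequence, using the exact commands traced above.
set -e
# Trigger the reboot on all testbed nodes; this play does not wait for them to return.
osism apply reboot -l testbed-nodes -e ireallymeanit=yes
# Wait until every node is reachable again before continuing with the deployment.
osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes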
2025-04-01 19:29:44.358915 | orchestrator | 2025-04-01 19:29:44.358984 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-01 19:29:44.364475 | orchestrator | 2025-04-01 19:29:44.365444 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-01 19:29:44.366431 | orchestrator | Tuesday 01 April 2025 19:29:44 +0000 (0:00:00.217) 0:00:00.217 ********* 2025-04-01 19:29:55.901957 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:29:55.902169 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:29:55.902484 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:29:55.902518 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:29:55.903894 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:29:55.904965 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:29:55.906093 | orchestrator | 2025-04-01 19:29:55.907503 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:29:55.908154 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.908201 | orchestrator | 2025-04-01 19:29:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:29:55.908896 | orchestrator | 2025-04-01 19:29:55 | INFO  | Please wait and do not abort execution. 2025-04-01 19:29:55.908930 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.909841 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.910953 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.911922 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.912789 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:29:55.914076 | orchestrator | 2025-04-01 19:29:55.914744 | orchestrator | Tuesday 01 April 2025 19:29:55 +0000 (0:00:11.541) 0:00:11.758 ********* 2025-04-01 19:29:55.915752 | orchestrator | =============================================================================== 2025-04-01 19:29:55.916650 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2025-04-01 19:29:56.581830 | orchestrator | + osism apply hddtemp 2025-04-01 19:29:58.163978 | orchestrator | 2025-04-01 19:29:58 | INFO  | Task 4eabc571-e101-43f1-bc14-96e83ec45185 (hddtemp) was prepared for execution. 2025-04-01 19:30:01.660068 | orchestrator | 2025-04-01 19:29:58 | INFO  | It takes a moment until task 4eabc571-e101-43f1-bc14-96e83ec45185 (hddtemp) has been started and output is visible here. 
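The hddtemp play that follows removes the old hddtemp package, enables the in-kernel drivetemp hwmon driver, and installs lm-sensors. As a purely illustrative spot check (not part of this job), the result could be verified by hand on any node once the play has run:

# Hypothetical manual check, assuming the drivetemp module and lm-sensors are in place.
sudo modprobe drivetemp                                  # no-op if the play already loaded it
sensors                                                  # drive temperatures typically appear as drivetemp-scsi-* adapters
grep . /sys/class/hwmon/hwmon*/temp1_input 2>/dev/null   # raw millidegree values exposed via sysfs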
2025-04-01 19:30:01.660206 | orchestrator | 2025-04-01 19:30:01.660994 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-01 19:30:01.661986 | orchestrator | 2025-04-01 19:30:01.662114 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-01 19:30:01.662137 | orchestrator | Tuesday 01 April 2025 19:30:01 +0000 (0:00:00.233) 0:00:00.233 ********* 2025-04-01 19:30:01.847947 | orchestrator | ok: [testbed-manager] 2025-04-01 19:30:01.926854 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:30:02.023442 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:30:02.114987 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:30:02.207338 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:30:02.474780 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:30:02.475392 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:30:02.475448 | orchestrator | 2025-04-01 19:30:02.475836 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-01 19:30:02.476427 | orchestrator | Tuesday 01 April 2025 19:30:02 +0000 (0:00:00.812) 0:00:01.046 ********* 2025-04-01 19:30:03.807391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:30:03.808470 | orchestrator | 2025-04-01 19:30:03.808522 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-01 19:30:03.809732 | orchestrator | Tuesday 01 April 2025 19:30:03 +0000 (0:00:01.331) 0:00:02.378 ********* 2025-04-01 19:30:06.209329 | orchestrator | ok: [testbed-manager] 2025-04-01 19:30:06.209495 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:30:06.209521 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:30:06.209843 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:30:06.210401 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:30:06.210617 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:30:06.214752 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:30:06.215419 | orchestrator | 2025-04-01 19:30:06.216061 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-01 19:30:06.216553 | orchestrator | Tuesday 01 April 2025 19:30:06 +0000 (0:00:02.405) 0:00:04.784 ********* 2025-04-01 19:30:06.920362 | orchestrator | changed: [testbed-manager] 2025-04-01 19:30:07.011938 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:30:07.531476 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:30:07.531661 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:30:07.532348 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:30:07.532997 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:30:07.536139 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:30:08.986660 | orchestrator | 2025-04-01 19:30:08.986724 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-01 19:30:08.986741 | orchestrator | Tuesday 01 April 2025 19:30:07 +0000 (0:00:01.318) 0:00:06.102 ********* 2025-04-01 19:30:08.986765 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:30:08.987894 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:30:08.988589 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:30:08.990421 | orchestrator | ok: [testbed-node-3] 2025-04-01 
19:30:08.991416 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:30:08.992551 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:30:08.993704 | orchestrator | ok: [testbed-manager] 2025-04-01 19:30:08.994427 | orchestrator | 2025-04-01 19:30:08.994972 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-01 19:30:08.996486 | orchestrator | Tuesday 01 April 2025 19:30:08 +0000 (0:00:01.453) 0:00:07.555 ********* 2025-04-01 19:30:09.305831 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:30:09.416015 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:30:09.524242 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:30:09.630871 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:30:09.759780 | orchestrator | changed: [testbed-manager] 2025-04-01 19:30:09.761240 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:30:09.762599 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:30:09.763095 | orchestrator | 2025-04-01 19:30:09.763973 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-01 19:30:09.764232 | orchestrator | Tuesday 01 April 2025 19:30:09 +0000 (0:00:00.778) 0:00:08.334 ********* 2025-04-01 19:30:20.404385 | orchestrator | changed: [testbed-manager] 2025-04-01 19:30:20.404745 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:30:20.404777 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:30:20.406241 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:30:20.407662 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:30:20.408486 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:30:20.408866 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:30:20.409715 | orchestrator | 2025-04-01 19:30:20.410099 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-01 19:30:20.410825 | orchestrator | Tuesday 01 April 2025 19:30:20 +0000 (0:00:10.639) 0:00:18.973 ********* 2025-04-01 19:30:21.704360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:30:21.708439 | orchestrator | 2025-04-01 19:30:23.948782 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-04-01 19:30:23.948927 | orchestrator | Tuesday 01 April 2025 19:30:21 +0000 (0:00:01.301) 0:00:20.275 ********* 2025-04-01 19:30:23.948981 | orchestrator | changed: [testbed-manager] 2025-04-01 19:30:23.949211 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:30:23.949835 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:30:23.949888 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:30:23.950333 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:30:23.951947 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:30:23.953646 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:30:23.954408 | orchestrator | 2025-04-01 19:30:23.955041 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:30:23.955568 | orchestrator | 2025-04-01 19:30:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:30:23.956525 | orchestrator | 2025-04-01 19:30:23 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:30:23.957950 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:30:23.958402 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.959242 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.959773 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.960885 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.961624 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.961939 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:23.962444 | orchestrator | 2025-04-01 19:30:23.963068 | orchestrator | Tuesday 01 April 2025 19:30:23 +0000 (0:00:02.248) 0:00:22.523 ********* 2025-04-01 19:30:23.963510 | orchestrator | =============================================================================== 2025-04-01 19:30:23.963834 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 10.64s 2025-04-01 19:30:23.964385 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.41s 2025-04-01 19:30:23.964795 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.25s 2025-04-01 19:30:23.964891 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.45s 2025-04-01 19:30:23.965573 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2025-04-01 19:30:23.966115 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.32s 2025-04-01 19:30:23.966500 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.30s 2025-04-01 19:30:23.967130 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.81s 2025-04-01 19:30:23.967456 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.78s 2025-04-01 19:30:24.727006 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-01 19:30:26.069938 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-01 19:30:26.070489 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-01 19:30:26.070542 | orchestrator | + local max_attempts=60 2025-04-01 19:30:26.070560 | orchestrator | + local name=ceph-ansible 2025-04-01 19:30:26.070576 | orchestrator | + local attempt_num=1 2025-04-01 19:30:26.070599 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-01 19:30:26.110141 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-01 19:30:26.110335 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-01 19:30:26.110362 | orchestrator | + local max_attempts=60 2025-04-01 19:30:26.110377 | orchestrator | + local name=kolla-ansible 2025-04-01 19:30:26.110392 | orchestrator | + local attempt_num=1 2025-04-01 19:30:26.110412 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-01 19:30:26.138007 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-01 19:30:26.138165 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-04-01 19:30:26.138186 | orchestrator | + local max_attempts=60 2025-04-01 19:30:26.138202 | orchestrator | + local name=osism-ansible 2025-04-01 19:30:26.138216 | orchestrator | + local attempt_num=1 2025-04-01 19:30:26.138235 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-01 19:30:26.168648 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-01 19:30:26.344232 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-01 19:30:26.344364 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-01 19:30:26.344394 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-01 19:30:26.494829 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-01 19:30:26.664144 | orchestrator | ARA in osism-ansible already disabled. 2025-04-01 19:30:26.851015 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-01 19:30:26.851923 | orchestrator | + osism apply gather-facts 2025-04-01 19:30:28.420496 | orchestrator | 2025-04-01 19:30:28 | INFO  | Task 505bdcea-874c-4548-bd02-79107d03a024 (gather-facts) was prepared for execution. 2025-04-01 19:30:31.934928 | orchestrator | 2025-04-01 19:30:28 | INFO  | It takes a moment until task 505bdcea-874c-4548-bd02-79107d03a024 (gather-facts) has been started and output is visible here. 2025-04-01 19:30:31.935118 | orchestrator | 2025-04-01 19:30:31.935207 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-01 19:30:31.935724 | orchestrator | 2025-04-01 19:30:31.936745 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:30:31.936776 | orchestrator | Tuesday 01 April 2025 19:30:31 +0000 (0:00:00.185) 0:00:00.185 ********* 2025-04-01 19:30:36.637845 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:30:36.638585 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:30:36.638621 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:30:36.642468 | orchestrator | ok: [testbed-manager] 2025-04-01 19:30:36.642892 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:30:36.644623 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:30:36.645583 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:30:36.646382 | orchestrator | 2025-04-01 19:30:36.647169 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-01 19:30:36.647790 | orchestrator | 2025-04-01 19:30:36.648444 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-01 19:30:36.649196 | orchestrator | Tuesday 01 April 2025 19:30:36 +0000 (0:00:04.703) 0:00:04.888 ********* 2025-04-01 19:30:36.879615 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:30:37.012694 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:30:37.150474 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:30:37.293985 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:30:37.407479 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:30:37.456217 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:30:37.456394 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:30:37.456834 | orchestrator | 2025-04-01 19:30:37.457261 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:30:37.457502 | orchestrator | 2025-04-01 19:30:37 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-04-01 19:30:37.457679 | orchestrator | 2025-04-01 19:30:37 | INFO  | Please wait and do not abort execution. 2025-04-01 19:30:37.458376 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.458863 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.459083 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.459478 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.459825 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.460068 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.461002 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:30:37.461869 | orchestrator | 2025-04-01 19:30:37.462430 | orchestrator | Tuesday 01 April 2025 19:30:37 +0000 (0:00:00.822) 0:00:05.711 ********* 2025-04-01 19:30:37.463042 | orchestrator | =============================================================================== 2025-04-01 19:30:37.463499 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s 2025-04-01 19:30:37.463966 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.82s 2025-04-01 19:30:38.158702 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-01 19:30:38.174158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-01 19:30:38.191359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-01 19:30:38.206947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-01 19:30:38.223848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-01 19:30:38.248733 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-01 19:30:38.268619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-01 19:30:38.289630 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-01 19:30:38.306242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-01 19:30:38.327034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-01 19:30:38.342596 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-01 19:30:38.356651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-01 19:30:38.370812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-04-01 19:30:38.385585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-01 19:30:38.399637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-01 19:30:38.415116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-01 19:30:38.427849 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-01 19:30:38.441974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-01 19:30:38.457081 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-01 19:30:38.470585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-01 19:30:38.483753 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-01 19:30:38.603737 | orchestrator | changed 2025-04-01 19:30:38.656023 | 2025-04-01 19:30:38.656120 | TASK [Deploy services] 2025-04-01 19:30:38.762076 | orchestrator | skipping: Conditional result was False 2025-04-01 19:30:38.774840 | 2025-04-01 19:30:38.774928 | TASK [Deploy in a nutshell] 2025-04-01 19:30:39.419687 | orchestrator | + set -e 2025-04-01 19:30:39.420791 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-01 19:30:39.420835 | orchestrator | ++ export INTERACTIVE=false 2025-04-01 19:30:39.420853 | orchestrator | ++ INTERACTIVE=false 2025-04-01 19:30:39.420896 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-01 19:30:39.420915 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-01 19:30:39.420931 | orchestrator | + source /opt/manager-vars.sh 2025-04-01 19:30:39.420954 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-01 19:30:39.420978 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-01 19:30:39.420994 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-01 19:30:39.421009 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-01 19:30:39.421023 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-01 19:30:39.421037 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-01 19:30:39.421051 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-01 19:30:39.421066 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-01 19:30:39.421080 | orchestrator | 2025-04-01 19:30:39.421095 | orchestrator | # PULL IMAGES 2025-04-01 19:30:39.421109 | orchestrator | 2025-04-01 19:30:39.421123 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-01 19:30:39.421137 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-01 19:30:39.421151 | orchestrator | ++ export ARA=false 2025-04-01 19:30:39.421164 | orchestrator | ++ ARA=false 2025-04-01 19:30:39.421178 | orchestrator | ++ export TEMPEST=false 2025-04-01 19:30:39.421192 | orchestrator | ++ TEMPEST=false 2025-04-01 19:30:39.421206 | orchestrator | ++ export IS_ZUUL=true 2025-04-01 19:30:39.421220 | orchestrator | ++ IS_ZUUL=true 2025-04-01 19:30:39.421234 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:30:39.421248 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.82 2025-04-01 19:30:39.421262 | orchestrator | ++ export EXTERNAL_API=false 2025-04-01 19:30:39.421276 | orchestrator | ++ EXTERNAL_API=false 2025-04-01 19:30:39.421314 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-01 19:30:39.421329 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-01 19:30:39.421351 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-01 19:30:39.421365 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-01 19:30:39.421379 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-01 19:30:39.421393 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-01 19:30:39.421407 | orchestrator | + echo 2025-04-01 19:30:39.421421 | orchestrator | + echo '# PULL IMAGES' 2025-04-01 19:30:39.421435 | orchestrator | + echo 2025-04-01 19:30:39.421459 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-01 19:30:39.480560 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-01 19:30:41.063642 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-01 19:30:41.063802 | orchestrator | 2025-04-01 19:30:41 | INFO  | Trying to run play pull-images in environment custom 2025-04-01 19:30:41.116669 | orchestrator | 2025-04-01 19:30:41 | INFO  | Task 7063631b-2793-4337-968a-aeefd899e8f1 (pull-images) was prepared for execution. 2025-04-01 19:30:44.677467 | orchestrator | 2025-04-01 19:30:41 | INFO  | It takes a moment until task 7063631b-2793-4337-968a-aeefd899e8f1 (pull-images) has been started and output is visible here. 2025-04-01 19:30:44.677584 | orchestrator | 2025-04-01 19:30:44.678830 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-01 19:30:44.680678 | orchestrator | 2025-04-01 19:30:44.681764 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-01 19:30:44.683654 | orchestrator | Tuesday 01 April 2025 19:30:44 +0000 (0:00:00.157) 0:00:00.157 ********* 2025-04-01 19:31:18.527530 | orchestrator | changed: [testbed-manager] 2025-04-01 19:32:15.513484 | orchestrator | 2025-04-01 19:32:15.513603 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-01 19:32:15.513615 | orchestrator | Tuesday 01 April 2025 19:31:18 +0000 (0:00:33.851) 0:00:34.009 ********* 2025-04-01 19:32:15.513639 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-01 19:32:15.513715 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-01 19:32:15.513727 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-01 19:32:15.513736 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-01 19:32:15.513758 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-01 19:32:15.513770 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-01 19:32:15.517064 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-01 19:32:15.517957 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-01 19:32:15.518161 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-01 19:32:15.518178 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-04-01 19:32:15.521224 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-01 19:32:15.521383 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-01 19:32:15.521596 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-01 19:32:15.522942 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-01 19:32:15.528017 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-01 19:32:15.531255 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-01 19:32:15.531421 | orchestrator | 
changed: [testbed-manager] => (item=octavia) 2025-04-01 19:32:15.531732 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-01 19:32:15.531997 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-01 19:32:15.532281 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-01 19:32:15.532583 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-01 19:32:15.534346 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-01 19:32:15.537846 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-01 19:32:15.538127 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-01 19:32:15.538150 | orchestrator | 2025-04-01 19:32:15.538798 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:32:15.542011 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:32:15.542536 | orchestrator | 2025-04-01 19:32:15.542561 | orchestrator | Tuesday 01 April 2025 19:32:15 +0000 (0:00:56.981) 0:01:30.991 ********* 2025-04-01 19:32:15.548972 | orchestrator | =============================================================================== 2025-04-01 19:32:15.549048 | orchestrator | 2025-04-01 19:32:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:32:15.549062 | orchestrator | 2025-04-01 19:32:15 | INFO  | Please wait and do not abort execution. 2025-04-01 19:32:15.549085 | orchestrator | Pull other images ------------------------------------------------------ 56.98s 2025-04-01 19:32:17.811235 | orchestrator | Pull keystone image ---------------------------------------------------- 33.85s 2025-04-01 19:32:17.811377 | orchestrator | 2025-04-01 19:32:17 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-01 19:32:17.860927 | orchestrator | 2025-04-01 19:32:17 | INFO  | Task 463d04a0-25a5-4a21-b962-72bf6b9b40df (wipe-partitions) was prepared for execution. 2025-04-01 19:32:21.252785 | orchestrator | 2025-04-01 19:32:17 | INFO  | It takes a moment until task 463d04a0-25a5-4a21-b962-72bf6b9b40df (wipe-partitions) has been started and output is visible here. 
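The wipe-partitions play that follows clears any old partition and filesystem signatures from the nodes' extra disks so they can be reused. Per device it amounts to roughly the following sketch, run on the storage nodes themselves and using the /dev/sdb to /dev/sdd devices named in the tasks below (destructive, only meant for the throwaway testbed disks):

# Approximate per-node equivalent of the wipe tasks shown below.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    sudo wipefs --all "$dev"                                       # drop filesystem/RAID/LVM signatures
    sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct     # overwrite the first 32M with zeros
done
sudo udevadm control --reload-rules                                # reload udev rules
sudo udevadm trigger                                               # request device events from the kernel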
2025-04-01 19:32:21.607707 | orchestrator | 2025-04-01 19:32:22.932142 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-01 19:32:22.932363 | orchestrator | 2025-04-01 19:32:22.932394 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-01 19:32:22.932409 | orchestrator | Tuesday 01 April 2025 19:32:21 +0000 (0:00:00.133) 0:00:00.133 ********* 2025-04-01 19:32:22.932440 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:32:22.932523 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:32:22.933389 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:32:22.940035 | orchestrator | 2025-04-01 19:32:23.100429 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-01 19:32:23.100517 | orchestrator | Tuesday 01 April 2025 19:32:22 +0000 (0:00:01.680) 0:00:01.813 ********* 2025-04-01 19:32:23.100547 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:23.221762 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:32:23.222112 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:32:23.222899 | orchestrator | 2025-04-01 19:32:23.223232 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-01 19:32:23.223838 | orchestrator | Tuesday 01 April 2025 19:32:23 +0000 (0:00:00.287) 0:00:02.100 ********* 2025-04-01 19:32:24.089505 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:32:24.090872 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:32:24.091384 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:32:24.092930 | orchestrator | 2025-04-01 19:32:24.094102 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-01 19:32:24.095760 | orchestrator | Tuesday 01 April 2025 19:32:24 +0000 (0:00:00.867) 0:00:02.967 ********* 2025-04-01 19:32:24.265619 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:24.371686 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:32:24.373127 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:32:24.373973 | orchestrator | 2025-04-01 19:32:24.375022 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-01 19:32:24.375430 | orchestrator | Tuesday 01 April 2025 19:32:24 +0000 (0:00:00.285) 0:00:03.253 ********* 2025-04-01 19:32:25.838881 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-01 19:32:25.839598 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-01 19:32:25.843728 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-01 19:32:25.844982 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-01 19:32:25.845433 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-01 19:32:25.847112 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-01 19:32:25.848363 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-01 19:32:25.849204 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-01 19:32:25.849253 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-01 19:32:25.850940 | orchestrator | 2025-04-01 19:32:25.851911 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-01 19:32:25.851944 | orchestrator | Tuesday 01 April 2025 19:32:25 +0000 (0:00:01.462) 0:00:04.715 ********* 2025-04-01 19:32:27.141031 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-01 19:32:27.142185 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-01 19:32:27.143504 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-01 19:32:27.144407 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-01 19:32:27.147624 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-01 19:32:27.148145 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-04-01 19:32:27.149970 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-01 19:32:27.150516 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-01 19:32:27.151051 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-01 19:32:27.151590 | orchestrator | 2025-04-01 19:32:27.152284 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-01 19:32:27.154137 | orchestrator | Tuesday 01 April 2025 19:32:27 +0000 (0:00:01.304) 0:00:06.020 ********* 2025-04-01 19:32:29.412689 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-01 19:32:29.413134 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-01 19:32:29.414541 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-01 19:32:29.415508 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-01 19:32:29.420517 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-01 19:32:29.421134 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-01 19:32:29.421170 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-01 19:32:29.422474 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-01 19:32:29.422962 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-01 19:32:29.422991 | orchestrator | 2025-04-01 19:32:29.423359 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-01 19:32:29.424174 | orchestrator | Tuesday 01 April 2025 19:32:29 +0000 (0:00:02.268) 0:00:08.288 ********* 2025-04-01 19:32:30.008460 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:32:30.008825 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:32:30.008971 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:32:30.009758 | orchestrator | 2025-04-01 19:32:30.010227 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-04-01 19:32:30.011193 | orchestrator | Tuesday 01 April 2025 19:32:30 +0000 (0:00:00.601) 0:00:08.890 ********* 2025-04-01 19:32:30.656593 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:32:30.657263 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:32:30.658558 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:32:30.659000 | orchestrator | 2025-04-01 19:32:30.659590 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:32:30.659888 | orchestrator | 2025-04-01 19:32:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:32:30.660177 | orchestrator | 2025-04-01 19:32:30 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:32:30.661138 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:30.661526 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:30.662116 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:30.662748 | orchestrator | 2025-04-01 19:32:30.663105 | orchestrator | Tuesday 01 April 2025 19:32:30 +0000 (0:00:00.647) 0:00:09.537 ********* 2025-04-01 19:32:30.663514 | orchestrator | =============================================================================== 2025-04-01 19:32:30.663950 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.27s 2025-04-01 19:32:30.664243 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.68s 2025-04-01 19:32:30.664879 | orchestrator | Check device availability ----------------------------------------------- 1.46s 2025-04-01 19:32:30.665185 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s 2025-04-01 19:32:30.665817 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.87s 2025-04-01 19:32:30.666004 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2025-04-01 19:32:30.666333 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-04-01 19:32:30.666697 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2025-04-01 19:32:30.667024 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s 2025-04-01 19:32:33.168942 | orchestrator | 2025-04-01 19:32:33 | INFO  | Task ae0993e0-0b2e-468e-a634-bed5658cca29 (facts) was prepared for execution. 2025-04-01 19:32:36.569615 | orchestrator | 2025-04-01 19:32:33 | INFO  | It takes a moment until task ae0993e0-0b2e-468e-a634-bed5658cca29 (facts) has been started and output is visible here. 
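The wipe-partitions play above reports its work only by task name. As a rough reference for what those task names usually correspond to, the sketch below shows an equivalent sequence; the host names and the device list (/dev/sdb to /dev/sdd) are taken from the log, while the exact modules, privilege escalation, and command flags used by the real play are assumptions.

- name: Wipe partitions (sketch of the steps named above)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5
  become: true
  vars:
    wipe_devices:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  tasks:
    - name: Wipe partitions with wipefs
      ansible.builtin.command: "wipefs --all {{ item }}"
      loop: "{{ wipe_devices }}"

    - name: Overwrite first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32"
      loop: "{{ wipe_devices }}"

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger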
2025-04-01 19:32:36.569745 | orchestrator | 2025-04-01 19:32:36.573518 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-01 19:32:36.573557 | orchestrator | 2025-04-01 19:32:37.590161 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-01 19:32:37.590239 | orchestrator | Tuesday 01 April 2025 19:32:36 +0000 (0:00:00.222) 0:00:00.222 ********* 2025-04-01 19:32:37.590268 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:32:37.590771 | orchestrator | ok: [testbed-manager] 2025-04-01 19:32:37.591259 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:32:37.595262 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:32:37.595659 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:32:37.596291 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:32:37.596411 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:32:37.598277 | orchestrator | 2025-04-01 19:32:37.598766 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-01 19:32:37.598999 | orchestrator | Tuesday 01 April 2025 19:32:37 +0000 (0:00:01.017) 0:00:01.239 ********* 2025-04-01 19:32:37.762906 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:32:37.861265 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:32:37.939821 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:32:38.016277 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:32:38.089594 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:38.749865 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:32:38.750260 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:32:38.751795 | orchestrator | 2025-04-01 19:32:38.756499 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-01 19:32:38.757191 | orchestrator | 2025-04-01 19:32:38.757917 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:32:38.757943 | orchestrator | Tuesday 01 April 2025 19:32:38 +0000 (0:00:01.167) 0:00:02.407 ********* 2025-04-01 19:32:43.615431 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:32:43.616514 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:32:43.616551 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:32:43.617796 | orchestrator | ok: [testbed-manager] 2025-04-01 19:32:43.618691 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:32:43.619285 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:32:43.620728 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:32:43.621094 | orchestrator | 2025-04-01 19:32:43.622104 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-01 19:32:43.623187 | orchestrator | 2025-04-01 19:32:43.623899 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-01 19:32:43.624756 | orchestrator | Tuesday 01 April 2025 19:32:43 +0000 (0:00:04.864) 0:00:07.271 ********* 2025-04-01 19:32:43.990357 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:32:44.081131 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:32:44.159347 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:32:44.238921 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:32:44.323553 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:44.378370 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:32:44.378885 | orchestrator | skipping: 
[testbed-node-5] 2025-04-01 19:32:44.378925 | orchestrator | 2025-04-01 19:32:44.379145 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:32:44.379750 | orchestrator | 2025-04-01 19:32:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:32:44.379991 | orchestrator | 2025-04-01 19:32:44 | INFO  | Please wait and do not abort execution. 2025-04-01 19:32:44.380635 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.380950 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.381484 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.381744 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.383396 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.384549 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.384575 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:32:44.384618 | orchestrator | 2025-04-01 19:32:44.384640 | orchestrator | Tuesday 01 April 2025 19:32:44 +0000 (0:00:00.763) 0:00:08.034 ********* 2025-04-01 19:32:44.385086 | orchestrator | =============================================================================== 2025-04-01 19:32:44.385597 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2025-04-01 19:32:44.386245 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2025-04-01 19:32:44.386705 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s 2025-04-01 19:32:44.387024 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2025-04-01 19:32:46.794348 | orchestrator | 2025-04-01 19:32:46 | INFO  | Task f930f7a4-d7b1-4ee7-98da-c6867f0b46aa (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-01 19:32:50.887204 | orchestrator | 2025-04-01 19:32:46 | INFO  | It takes a moment until task f930f7a4-d7b1-4ee7-98da-c6867f0b46aa (ceph-configure-lvm-volumes) has been started and output is visible here. 
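The ceph-configure-lvm-volumes task prepared above derives, per storage node, which block devices become OSDs and assigns stable UUIDs for their volume groups and logical volumes; a handler then writes the result to a host-specific configuration file (the target path is not visible in this log). For orientation, the YAML below reproduces the structure that the play prints for testbed-node-3 further down, which is the block-only form the ceph-ansible based deployment (CEPH_STACK=ceph-ansible, exported earlier in this log) is expected to consume.

# Structure produced for testbed-node-3 (values copied from the play output below);
# only the block-only variant is generated in this run, the DB/WAL variants are skipped.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: bdd573d7-384a-5f49-8a42-9b210b6d8834
  sdc:
    osd_lvm_uuid: 988d16a2-b35c-5840-9d7c-a8265d6d87f9
lvm_volumes:
  - data: osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834
    data_vg: ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834
  - data: osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9
    data_vg: ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9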
2025-04-01 19:32:50.887289 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:32:51.635795 | orchestrator | 2025-04-01 19:32:51.636457 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-01 19:32:51.636894 | orchestrator | 2025-04-01 19:32:51.637252 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:32:51.637741 | orchestrator | Tuesday 01 April 2025 19:32:51 +0000 (0:00:00.645) 0:00:00.645 ********* 2025-04-01 19:32:51.954125 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-01 19:32:51.957221 | orchestrator | 2025-04-01 19:32:51.957659 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:32:51.958156 | orchestrator | Tuesday 01 April 2025 19:32:51 +0000 (0:00:00.326) 0:00:00.971 ********* 2025-04-01 19:32:52.197346 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:32:52.197827 | orchestrator | 2025-04-01 19:32:52.199966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:52.200838 | orchestrator | Tuesday 01 April 2025 19:32:52 +0000 (0:00:00.244) 0:00:01.216 ********* 2025-04-01 19:32:52.825736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-01 19:32:52.828477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-01 19:32:52.829126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-01 19:32:52.829405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-01 19:32:52.829927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-01 19:32:52.830436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-01 19:32:52.830745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-01 19:32:52.831116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-01 19:32:52.834461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-01 19:32:52.834904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-01 19:32:52.834931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-01 19:32:52.835375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-01 19:32:52.835779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-01 19:32:52.836273 | orchestrator | 2025-04-01 19:32:52.836553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:52.836723 | orchestrator | Tuesday 01 April 2025 19:32:52 +0000 (0:00:00.629) 0:00:01.845 ********* 2025-04-01 19:32:53.110559 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:53.113541 | orchestrator | 2025-04-01 19:32:53.114437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:53.114900 | orchestrator | Tuesday 01 April 2025 19:32:53 +0000 
(0:00:00.282) 0:00:02.128 ********* 2025-04-01 19:32:53.406675 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:53.407127 | orchestrator | 2025-04-01 19:32:53.407208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:53.407618 | orchestrator | Tuesday 01 April 2025 19:32:53 +0000 (0:00:00.295) 0:00:02.424 ********* 2025-04-01 19:32:53.746415 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:53.746699 | orchestrator | 2025-04-01 19:32:53.746728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:53.746749 | orchestrator | Tuesday 01 April 2025 19:32:53 +0000 (0:00:00.340) 0:00:02.765 ********* 2025-04-01 19:32:54.018100 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:54.018504 | orchestrator | 2025-04-01 19:32:54.018773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:54.019087 | orchestrator | Tuesday 01 April 2025 19:32:54 +0000 (0:00:00.267) 0:00:03.033 ********* 2025-04-01 19:32:54.312445 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:54.313120 | orchestrator | 2025-04-01 19:32:54.313341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:54.314864 | orchestrator | Tuesday 01 April 2025 19:32:54 +0000 (0:00:00.297) 0:00:03.330 ********* 2025-04-01 19:32:54.545500 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:54.545997 | orchestrator | 2025-04-01 19:32:54.546355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:54.546387 | orchestrator | Tuesday 01 April 2025 19:32:54 +0000 (0:00:00.233) 0:00:03.564 ********* 2025-04-01 19:32:54.738466 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:54.738586 | orchestrator | 2025-04-01 19:32:54.739187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:54.739483 | orchestrator | Tuesday 01 April 2025 19:32:54 +0000 (0:00:00.191) 0:00:03.756 ********* 2025-04-01 19:32:55.125705 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:55.125830 | orchestrator | 2025-04-01 19:32:55.126166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:55.126461 | orchestrator | Tuesday 01 April 2025 19:32:55 +0000 (0:00:00.386) 0:00:04.142 ********* 2025-04-01 19:32:55.813659 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e) 2025-04-01 19:32:55.814286 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e) 2025-04-01 19:32:55.816849 | orchestrator | 2025-04-01 19:32:55.816915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:55.816936 | orchestrator | Tuesday 01 April 2025 19:32:55 +0000 (0:00:00.689) 0:00:04.832 ********* 2025-04-01 19:32:56.888511 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1) 2025-04-01 19:32:56.888764 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1) 2025-04-01 19:32:56.890636 | orchestrator | 2025-04-01 19:32:56.890945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 
19:32:56.891270 | orchestrator | Tuesday 01 April 2025 19:32:56 +0000 (0:00:01.073) 0:00:05.906 ********* 2025-04-01 19:32:57.421633 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72) 2025-04-01 19:32:57.423940 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72) 2025-04-01 19:32:57.424690 | orchestrator | 2025-04-01 19:32:57.425837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:57.427569 | orchestrator | Tuesday 01 April 2025 19:32:57 +0000 (0:00:00.531) 0:00:06.438 ********* 2025-04-01 19:32:57.879473 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03) 2025-04-01 19:32:57.881290 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03) 2025-04-01 19:32:57.882511 | orchestrator | 2025-04-01 19:32:57.883409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:32:57.886130 | orchestrator | Tuesday 01 April 2025 19:32:57 +0000 (0:00:00.459) 0:00:06.897 ********* 2025-04-01 19:32:58.407422 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:32:58.410999 | orchestrator | 2025-04-01 19:32:58.414174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:58.415875 | orchestrator | Tuesday 01 April 2025 19:32:58 +0000 (0:00:00.527) 0:00:07.425 ********* 2025-04-01 19:32:58.879024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-01 19:32:58.880980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-01 19:32:58.881491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-01 19:32:58.881750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-01 19:32:58.885528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-01 19:32:58.885673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-01 19:32:58.885841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-01 19:32:58.886127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-01 19:32:58.886581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-01 19:32:58.887269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-01 19:32:58.887469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-01 19:32:58.887881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-01 19:32:58.889429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-01 19:32:58.889810 | orchestrator | 2025-04-01 19:32:58.890104 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:58.891847 | orchestrator | Tuesday 01 April 2025 19:32:58 +0000 
(0:00:00.471) 0:00:07.897 ********* 2025-04-01 19:32:59.113205 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:59.113414 | orchestrator | 2025-04-01 19:32:59.114206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:59.114819 | orchestrator | Tuesday 01 April 2025 19:32:59 +0000 (0:00:00.231) 0:00:08.129 ********* 2025-04-01 19:32:59.338720 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:59.342201 | orchestrator | 2025-04-01 19:32:59.342251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:59.526899 | orchestrator | Tuesday 01 April 2025 19:32:59 +0000 (0:00:00.227) 0:00:08.356 ********* 2025-04-01 19:32:59.526946 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:59.527770 | orchestrator | 2025-04-01 19:32:59.529181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:59.534229 | orchestrator | Tuesday 01 April 2025 19:32:59 +0000 (0:00:00.188) 0:00:08.545 ********* 2025-04-01 19:32:59.756273 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:32:59.758610 | orchestrator | 2025-04-01 19:32:59.760018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:32:59.761802 | orchestrator | Tuesday 01 April 2025 19:32:59 +0000 (0:00:00.229) 0:00:08.775 ********* 2025-04-01 19:33:00.433532 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:00.433755 | orchestrator | 2025-04-01 19:33:00.434665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:00.435380 | orchestrator | Tuesday 01 April 2025 19:33:00 +0000 (0:00:00.676) 0:00:09.451 ********* 2025-04-01 19:33:00.655689 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:00.656342 | orchestrator | 2025-04-01 19:33:00.658112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:00.890240 | orchestrator | Tuesday 01 April 2025 19:33:00 +0000 (0:00:00.222) 0:00:09.674 ********* 2025-04-01 19:33:00.890338 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:00.890599 | orchestrator | 2025-04-01 19:33:00.892974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:01.135127 | orchestrator | Tuesday 01 April 2025 19:33:00 +0000 (0:00:00.233) 0:00:09.907 ********* 2025-04-01 19:33:01.135206 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:01.135564 | orchestrator | 2025-04-01 19:33:01.137016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:01.138473 | orchestrator | Tuesday 01 April 2025 19:33:01 +0000 (0:00:00.244) 0:00:10.152 ********* 2025-04-01 19:33:01.835403 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-01 19:33:01.836494 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-01 19:33:01.838403 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-01 19:33:01.839201 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-01 19:33:01.840978 | orchestrator | 2025-04-01 19:33:01.841296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:01.846681 | orchestrator | Tuesday 01 April 2025 19:33:01 +0000 (0:00:00.700) 0:00:10.853 ********* 2025-04-01 19:33:02.063084 | orchestrator | skipping: 
[testbed-node-3] 2025-04-01 19:33:02.065106 | orchestrator | 2025-04-01 19:33:02.065401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:02.069983 | orchestrator | Tuesday 01 April 2025 19:33:02 +0000 (0:00:00.227) 0:00:11.081 ********* 2025-04-01 19:33:02.274107 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:02.276614 | orchestrator | 2025-04-01 19:33:02.278487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:02.279471 | orchestrator | Tuesday 01 April 2025 19:33:02 +0000 (0:00:00.210) 0:00:11.292 ********* 2025-04-01 19:33:02.501937 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:02.503168 | orchestrator | 2025-04-01 19:33:02.504281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:02.505152 | orchestrator | Tuesday 01 April 2025 19:33:02 +0000 (0:00:00.228) 0:00:11.520 ********* 2025-04-01 19:33:02.729032 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:02.730821 | orchestrator | 2025-04-01 19:33:02.732073 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-01 19:33:02.737006 | orchestrator | Tuesday 01 April 2025 19:33:02 +0000 (0:00:00.225) 0:00:11.746 ********* 2025-04-01 19:33:02.948542 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-01 19:33:02.949810 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-01 19:33:02.952382 | orchestrator | 2025-04-01 19:33:02.953422 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-01 19:33:02.955179 | orchestrator | Tuesday 01 April 2025 19:33:02 +0000 (0:00:00.218) 0:00:11.965 ********* 2025-04-01 19:33:03.109250 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:03.110586 | orchestrator | 2025-04-01 19:33:03.111632 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-01 19:33:03.113454 | orchestrator | Tuesday 01 April 2025 19:33:03 +0000 (0:00:00.160) 0:00:12.126 ********* 2025-04-01 19:33:03.461015 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:03.462982 | orchestrator | 2025-04-01 19:33:03.465885 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-01 19:33:03.467182 | orchestrator | Tuesday 01 April 2025 19:33:03 +0000 (0:00:00.352) 0:00:12.479 ********* 2025-04-01 19:33:03.604506 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:03.606790 | orchestrator | 2025-04-01 19:33:03.607823 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-01 19:33:03.609100 | orchestrator | Tuesday 01 April 2025 19:33:03 +0000 (0:00:00.143) 0:00:12.622 ********* 2025-04-01 19:33:03.770450 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:33:03.772481 | orchestrator | 2025-04-01 19:33:03.773908 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-01 19:33:03.775202 | orchestrator | Tuesday 01 April 2025 19:33:03 +0000 (0:00:00.166) 0:00:12.788 ********* 2025-04-01 19:33:03.971955 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdd573d7-384a-5f49-8a42-9b210b6d8834'}}) 2025-04-01 19:33:03.973197 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '988d16a2-b35c-5840-9d7c-a8265d6d87f9'}}) 2025-04-01 19:33:03.973898 | orchestrator | 2025-04-01 19:33:03.974625 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-01 19:33:03.975276 | orchestrator | Tuesday 01 April 2025 19:33:03 +0000 (0:00:00.201) 0:00:12.990 ********* 2025-04-01 19:33:04.156141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdd573d7-384a-5f49-8a42-9b210b6d8834'}})  2025-04-01 19:33:04.157709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '988d16a2-b35c-5840-9d7c-a8265d6d87f9'}})  2025-04-01 19:33:04.158510 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:04.160091 | orchestrator | 2025-04-01 19:33:04.161249 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-01 19:33:04.162668 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.182) 0:00:13.172 ********* 2025-04-01 19:33:04.351434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdd573d7-384a-5f49-8a42-9b210b6d8834'}})  2025-04-01 19:33:04.356990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '988d16a2-b35c-5840-9d7c-a8265d6d87f9'}})  2025-04-01 19:33:04.357818 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:04.357852 | orchestrator | 2025-04-01 19:33:04.358762 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-01 19:33:04.359598 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.197) 0:00:13.369 ********* 2025-04-01 19:33:04.525668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdd573d7-384a-5f49-8a42-9b210b6d8834'}})  2025-04-01 19:33:04.526589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '988d16a2-b35c-5840-9d7c-a8265d6d87f9'}})  2025-04-01 19:33:04.528082 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:04.529019 | orchestrator | 2025-04-01 19:33:04.530092 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-01 19:33:04.531043 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.173) 0:00:13.543 ********* 2025-04-01 19:33:04.672038 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:33:04.673221 | orchestrator | 2025-04-01 19:33:04.677709 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-01 19:33:04.678164 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.146) 0:00:13.689 ********* 2025-04-01 19:33:04.824230 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:33:04.826236 | orchestrator | 2025-04-01 19:33:04.826609 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-01 19:33:04.835981 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.152) 0:00:13.842 ********* 2025-04-01 19:33:04.978181 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:04.980066 | orchestrator | 2025-04-01 19:33:04.981146 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-01 19:33:04.982508 | orchestrator | Tuesday 01 April 2025 19:33:04 +0000 (0:00:00.153) 0:00:13.995 ********* 2025-04-01 19:33:05.117340 | orchestrator | skipping: [testbed-node-3] 2025-04-01 
19:33:05.118537 | orchestrator | 2025-04-01 19:33:05.120101 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-01 19:33:05.124385 | orchestrator | Tuesday 01 April 2025 19:33:05 +0000 (0:00:00.138) 0:00:14.133 ********* 2025-04-01 19:33:05.487053 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:05.488104 | orchestrator | 2025-04-01 19:33:05.634564 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-01 19:33:05.634617 | orchestrator | Tuesday 01 April 2025 19:33:05 +0000 (0:00:00.371) 0:00:14.505 ********* 2025-04-01 19:33:05.634639 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:33:05.634902 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:05.636031 | orchestrator |  "sdb": { 2025-04-01 19:33:05.636573 | orchestrator |  "osd_lvm_uuid": "bdd573d7-384a-5f49-8a42-9b210b6d8834" 2025-04-01 19:33:05.637449 | orchestrator |  }, 2025-04-01 19:33:05.641326 | orchestrator |  "sdc": { 2025-04-01 19:33:05.642325 | orchestrator |  "osd_lvm_uuid": "988d16a2-b35c-5840-9d7c-a8265d6d87f9" 2025-04-01 19:33:05.643111 | orchestrator |  } 2025-04-01 19:33:05.643780 | orchestrator |  } 2025-04-01 19:33:05.644701 | orchestrator | } 2025-04-01 19:33:05.645364 | orchestrator | 2025-04-01 19:33:05.646109 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-01 19:33:05.648079 | orchestrator | Tuesday 01 April 2025 19:33:05 +0000 (0:00:00.147) 0:00:14.653 ********* 2025-04-01 19:33:05.798584 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:05.799985 | orchestrator | 2025-04-01 19:33:05.801257 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-01 19:33:05.802703 | orchestrator | Tuesday 01 April 2025 19:33:05 +0000 (0:00:00.163) 0:00:14.816 ********* 2025-04-01 19:33:05.947032 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:05.948812 | orchestrator | 2025-04-01 19:33:05.949092 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-01 19:33:05.952744 | orchestrator | Tuesday 01 April 2025 19:33:05 +0000 (0:00:00.148) 0:00:14.965 ********* 2025-04-01 19:33:06.106507 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:33:06.106968 | orchestrator | 2025-04-01 19:33:06.107002 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-01 19:33:06.107138 | orchestrator | Tuesday 01 April 2025 19:33:06 +0000 (0:00:00.160) 0:00:15.125 ********* 2025-04-01 19:33:06.391634 | orchestrator | changed: [testbed-node-3] => { 2025-04-01 19:33:06.394068 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-01 19:33:06.394116 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:06.396433 | orchestrator |  "sdb": { 2025-04-01 19:33:06.397483 | orchestrator |  "osd_lvm_uuid": "bdd573d7-384a-5f49-8a42-9b210b6d8834" 2025-04-01 19:33:06.398518 | orchestrator |  }, 2025-04-01 19:33:06.399510 | orchestrator |  "sdc": { 2025-04-01 19:33:06.400079 | orchestrator |  "osd_lvm_uuid": "988d16a2-b35c-5840-9d7c-a8265d6d87f9" 2025-04-01 19:33:06.401028 | orchestrator |  } 2025-04-01 19:33:06.402011 | orchestrator |  }, 2025-04-01 19:33:06.402805 | orchestrator |  "lvm_volumes": [ 2025-04-01 19:33:06.403247 | orchestrator |  { 2025-04-01 19:33:06.403996 | orchestrator |  "data": "osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834", 2025-04-01 19:33:06.404744 | orchestrator |  
"data_vg": "ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834" 2025-04-01 19:33:06.405097 | orchestrator |  }, 2025-04-01 19:33:06.405774 | orchestrator |  { 2025-04-01 19:33:06.406266 | orchestrator |  "data": "osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9", 2025-04-01 19:33:06.406816 | orchestrator |  "data_vg": "ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9" 2025-04-01 19:33:06.407529 | orchestrator |  } 2025-04-01 19:33:06.407927 | orchestrator |  ] 2025-04-01 19:33:06.408290 | orchestrator |  } 2025-04-01 19:33:06.408721 | orchestrator | } 2025-04-01 19:33:06.409340 | orchestrator | 2025-04-01 19:33:06.409632 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-01 19:33:06.410280 | orchestrator | Tuesday 01 April 2025 19:33:06 +0000 (0:00:00.280) 0:00:15.406 ********* 2025-04-01 19:33:08.835417 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-01 19:33:08.836291 | orchestrator | 2025-04-01 19:33:08.838103 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-01 19:33:08.838137 | orchestrator | 2025-04-01 19:33:08.838764 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:33:08.839971 | orchestrator | Tuesday 01 April 2025 19:33:08 +0000 (0:00:02.444) 0:00:17.851 ********* 2025-04-01 19:33:09.103741 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-01 19:33:09.104794 | orchestrator | 2025-04-01 19:33:09.109255 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:33:09.390577 | orchestrator | Tuesday 01 April 2025 19:33:09 +0000 (0:00:00.270) 0:00:18.121 ********* 2025-04-01 19:33:09.390656 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:33:09.391146 | orchestrator | 2025-04-01 19:33:09.392698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:09.396175 | orchestrator | Tuesday 01 April 2025 19:33:09 +0000 (0:00:00.285) 0:00:18.407 ********* 2025-04-01 19:33:09.822879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-01 19:33:09.823444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-01 19:33:09.823490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-01 19:33:09.824594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-01 19:33:09.825804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-01 19:33:09.825837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-01 19:33:09.826775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-01 19:33:09.827944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-01 19:33:09.829079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-01 19:33:09.829112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-01 19:33:09.829974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-01 19:33:09.830816 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-01 19:33:09.831548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-01 19:33:09.832645 | orchestrator | 2025-04-01 19:33:09.833001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:09.835478 | orchestrator | Tuesday 01 April 2025 19:33:09 +0000 (0:00:00.432) 0:00:18.839 ********* 2025-04-01 19:33:10.048285 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:10.049384 | orchestrator | 2025-04-01 19:33:10.049425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:10.053132 | orchestrator | Tuesday 01 April 2025 19:33:10 +0000 (0:00:00.217) 0:00:19.056 ********* 2025-04-01 19:33:10.271807 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:10.272547 | orchestrator | 2025-04-01 19:33:10.273085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:10.273572 | orchestrator | Tuesday 01 April 2025 19:33:10 +0000 (0:00:00.233) 0:00:19.289 ********* 2025-04-01 19:33:10.546475 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:10.547243 | orchestrator | 2025-04-01 19:33:10.547496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:10.548470 | orchestrator | Tuesday 01 April 2025 19:33:10 +0000 (0:00:00.275) 0:00:19.565 ********* 2025-04-01 19:33:11.211803 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:11.212050 | orchestrator | 2025-04-01 19:33:11.212976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:11.216862 | orchestrator | Tuesday 01 April 2025 19:33:11 +0000 (0:00:00.663) 0:00:20.228 ********* 2025-04-01 19:33:11.496348 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:11.730910 | orchestrator | 2025-04-01 19:33:11.731005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:11.731032 | orchestrator | Tuesday 01 April 2025 19:33:11 +0000 (0:00:00.275) 0:00:20.503 ********* 2025-04-01 19:33:11.731062 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:11.732836 | orchestrator | 2025-04-01 19:33:11.737004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:11.738530 | orchestrator | Tuesday 01 April 2025 19:33:11 +0000 (0:00:00.244) 0:00:20.748 ********* 2025-04-01 19:33:11.965545 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:11.969582 | orchestrator | 2025-04-01 19:33:11.970902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:11.974876 | orchestrator | Tuesday 01 April 2025 19:33:11 +0000 (0:00:00.233) 0:00:20.982 ********* 2025-04-01 19:33:12.202220 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:12.209145 | orchestrator | 2025-04-01 19:33:12.209619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:12.211586 | orchestrator | Tuesday 01 April 2025 19:33:12 +0000 (0:00:00.237) 0:00:21.219 ********* 2025-04-01 19:33:12.658968 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891) 2025-04-01 19:33:12.659413 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891) 2025-04-01 19:33:12.660087 | orchestrator | 2025-04-01 19:33:12.660909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:12.661949 | orchestrator | Tuesday 01 April 2025 19:33:12 +0000 (0:00:00.458) 0:00:21.677 ********* 2025-04-01 19:33:13.138996 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c) 2025-04-01 19:33:13.140894 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c) 2025-04-01 19:33:13.142178 | orchestrator | 2025-04-01 19:33:13.146093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:13.150100 | orchestrator | Tuesday 01 April 2025 19:33:13 +0000 (0:00:00.475) 0:00:22.153 ********* 2025-04-01 19:33:13.677279 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c) 2025-04-01 19:33:13.678786 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c) 2025-04-01 19:33:13.679391 | orchestrator | 2025-04-01 19:33:13.685571 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:13.687814 | orchestrator | Tuesday 01 April 2025 19:33:13 +0000 (0:00:00.541) 0:00:22.695 ********* 2025-04-01 19:33:14.492789 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905) 2025-04-01 19:33:14.493369 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905) 2025-04-01 19:33:14.495224 | orchestrator | 2025-04-01 19:33:14.495997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:14.500017 | orchestrator | Tuesday 01 April 2025 19:33:14 +0000 (0:00:00.813) 0:00:23.509 ********* 2025-04-01 19:33:15.346948 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:33:15.347491 | orchestrator | 2025-04-01 19:33:15.348127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:15.348728 | orchestrator | Tuesday 01 April 2025 19:33:15 +0000 (0:00:00.855) 0:00:24.365 ********* 2025-04-01 19:33:15.825812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-01 19:33:15.826072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-01 19:33:15.826830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-01 19:33:15.827568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-01 19:33:15.828086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-01 19:33:15.828490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-01 19:33:15.829245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-01 19:33:15.829424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-01 19:33:15.829827 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-01 19:33:15.830628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-01 19:33:15.831007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-01 19:33:15.831571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-01 19:33:15.832021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-01 19:33:15.832388 | orchestrator | 2025-04-01 19:33:15.833880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:15.835090 | orchestrator | Tuesday 01 April 2025 19:33:15 +0000 (0:00:00.479) 0:00:24.844 ********* 2025-04-01 19:33:16.038245 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:16.038828 | orchestrator | 2025-04-01 19:33:16.039836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:16.041642 | orchestrator | Tuesday 01 April 2025 19:33:16 +0000 (0:00:00.210) 0:00:25.055 ********* 2025-04-01 19:33:16.245182 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:16.246368 | orchestrator | 2025-04-01 19:33:16.249494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:16.479864 | orchestrator | Tuesday 01 April 2025 19:33:16 +0000 (0:00:00.207) 0:00:25.262 ********* 2025-04-01 19:33:16.479948 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:16.480199 | orchestrator | 2025-04-01 19:33:16.482419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:16.706514 | orchestrator | Tuesday 01 April 2025 19:33:16 +0000 (0:00:00.233) 0:00:25.496 ********* 2025-04-01 19:33:16.706574 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:16.707739 | orchestrator | 2025-04-01 19:33:16.707767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:16.708166 | orchestrator | Tuesday 01 April 2025 19:33:16 +0000 (0:00:00.228) 0:00:25.724 ********* 2025-04-01 19:33:16.924485 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:16.924650 | orchestrator | 2025-04-01 19:33:16.925443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:16.926080 | orchestrator | Tuesday 01 April 2025 19:33:16 +0000 (0:00:00.217) 0:00:25.942 ********* 2025-04-01 19:33:17.180446 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:17.181435 | orchestrator | 2025-04-01 19:33:17.182420 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:17.183372 | orchestrator | Tuesday 01 April 2025 19:33:17 +0000 (0:00:00.255) 0:00:26.197 ********* 2025-04-01 19:33:17.405944 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:17.407248 | orchestrator | 2025-04-01 19:33:17.407961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:17.410336 | orchestrator | Tuesday 01 April 2025 19:33:17 +0000 (0:00:00.226) 0:00:26.423 ********* 2025-04-01 19:33:17.630929 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:17.631636 | orchestrator | 2025-04-01 19:33:17.632871 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-01 19:33:17.634918 | orchestrator | Tuesday 01 April 2025 19:33:17 +0000 (0:00:00.224) 0:00:26.648 ********* 2025-04-01 19:33:18.800040 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-01 19:33:18.800514 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-01 19:33:18.802224 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-01 19:33:18.803051 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-01 19:33:18.805554 | orchestrator | 2025-04-01 19:33:19.040932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:19.041015 | orchestrator | Tuesday 01 April 2025 19:33:18 +0000 (0:00:01.168) 0:00:27.817 ********* 2025-04-01 19:33:19.041044 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:19.041759 | orchestrator | 2025-04-01 19:33:19.042440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:19.044433 | orchestrator | Tuesday 01 April 2025 19:33:19 +0000 (0:00:00.239) 0:00:28.056 ********* 2025-04-01 19:33:19.270193 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:19.270792 | orchestrator | 2025-04-01 19:33:19.271665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:19.272345 | orchestrator | Tuesday 01 April 2025 19:33:19 +0000 (0:00:00.231) 0:00:28.288 ********* 2025-04-01 19:33:19.527866 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:19.528413 | orchestrator | 2025-04-01 19:33:19.531534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:19.532588 | orchestrator | Tuesday 01 April 2025 19:33:19 +0000 (0:00:00.256) 0:00:28.544 ********* 2025-04-01 19:33:19.748246 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:19.750179 | orchestrator | 2025-04-01 19:33:19.750918 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-01 19:33:19.754065 | orchestrator | Tuesday 01 April 2025 19:33:19 +0000 (0:00:00.220) 0:00:28.765 ********* 2025-04-01 19:33:19.944042 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-01 19:33:19.944828 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-01 19:33:19.945149 | orchestrator | 2025-04-01 19:33:19.945635 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-01 19:33:19.946059 | orchestrator | Tuesday 01 April 2025 19:33:19 +0000 (0:00:00.196) 0:00:28.962 ********* 2025-04-01 19:33:20.105719 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:20.107406 | orchestrator | 2025-04-01 19:33:20.108378 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-01 19:33:20.109286 | orchestrator | Tuesday 01 April 2025 19:33:20 +0000 (0:00:00.160) 0:00:29.123 ********* 2025-04-01 19:33:20.260122 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:20.260875 | orchestrator | 2025-04-01 19:33:20.262248 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-01 19:33:20.262442 | orchestrator | Tuesday 01 April 2025 19:33:20 +0000 (0:00:00.155) 0:00:29.278 ********* 2025-04-01 19:33:20.416502 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:20.416940 | orchestrator | 2025-04-01 
19:33:20.419195 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-01 19:33:20.420020 | orchestrator | Tuesday 01 April 2025 19:33:20 +0000 (0:00:00.155) 0:00:29.434 ********* 2025-04-01 19:33:20.551518 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:33:20.553012 | orchestrator | 2025-04-01 19:33:20.554214 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-01 19:33:20.554515 | orchestrator | Tuesday 01 April 2025 19:33:20 +0000 (0:00:00.136) 0:00:29.570 ********* 2025-04-01 19:33:20.739916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52229b2b-1fb5-50ba-ad18-deadbd92af76'}}) 2025-04-01 19:33:20.741292 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9675d24-a7d4-5c32-a36a-48aa524d4563'}}) 2025-04-01 19:33:20.743075 | orchestrator | 2025-04-01 19:33:20.744465 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-01 19:33:20.745780 | orchestrator | Tuesday 01 April 2025 19:33:20 +0000 (0:00:00.187) 0:00:29.757 ********* 2025-04-01 19:33:21.156727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52229b2b-1fb5-50ba-ad18-deadbd92af76'}})  2025-04-01 19:33:21.160167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9675d24-a7d4-5c32-a36a-48aa524d4563'}})  2025-04-01 19:33:21.161658 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:21.162935 | orchestrator | 2025-04-01 19:33:21.164230 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-01 19:33:21.165761 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 (0:00:00.415) 0:00:30.172 ********* 2025-04-01 19:33:21.337191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52229b2b-1fb5-50ba-ad18-deadbd92af76'}})  2025-04-01 19:33:21.338203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9675d24-a7d4-5c32-a36a-48aa524d4563'}})  2025-04-01 19:33:21.340100 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:21.341136 | orchestrator | 2025-04-01 19:33:21.342067 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-01 19:33:21.342422 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 (0:00:00.181) 0:00:30.354 ********* 2025-04-01 19:33:21.513862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52229b2b-1fb5-50ba-ad18-deadbd92af76'}})  2025-04-01 19:33:21.514982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9675d24-a7d4-5c32-a36a-48aa524d4563'}})  2025-04-01 19:33:21.515013 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:21.515048 | orchestrator | 2025-04-01 19:33:21.515831 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-01 19:33:21.515940 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 (0:00:00.177) 0:00:30.532 ********* 2025-04-01 19:33:21.663884 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:33:21.664719 | orchestrator | 2025-04-01 19:33:21.665833 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-01 19:33:21.669346 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 
(0:00:00.149) 0:00:30.682 ********* 2025-04-01 19:33:21.823336 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:33:21.825342 | orchestrator | 2025-04-01 19:33:21.826912 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-01 19:33:21.965953 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 (0:00:00.159) 0:00:30.841 ********* 2025-04-01 19:33:21.966066 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:21.968916 | orchestrator | 2025-04-01 19:33:21.970107 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-01 19:33:21.971567 | orchestrator | Tuesday 01 April 2025 19:33:21 +0000 (0:00:00.142) 0:00:30.983 ********* 2025-04-01 19:33:22.118508 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:22.119580 | orchestrator | 2025-04-01 19:33:22.122206 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-01 19:33:22.266547 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.151) 0:00:31.135 ********* 2025-04-01 19:33:22.266618 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:22.266955 | orchestrator | 2025-04-01 19:33:22.268577 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-01 19:33:22.268986 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.148) 0:00:31.284 ********* 2025-04-01 19:33:22.444790 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:33:22.446139 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:22.446725 | orchestrator |  "sdb": { 2025-04-01 19:33:22.449773 | orchestrator |  "osd_lvm_uuid": "52229b2b-1fb5-50ba-ad18-deadbd92af76" 2025-04-01 19:33:22.451261 | orchestrator |  }, 2025-04-01 19:33:22.452463 | orchestrator |  "sdc": { 2025-04-01 19:33:22.453355 | orchestrator |  "osd_lvm_uuid": "b9675d24-a7d4-5c32-a36a-48aa524d4563" 2025-04-01 19:33:22.454236 | orchestrator |  } 2025-04-01 19:33:22.454807 | orchestrator |  } 2025-04-01 19:33:22.455365 | orchestrator | } 2025-04-01 19:33:22.455897 | orchestrator | 2025-04-01 19:33:22.456682 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-01 19:33:22.457136 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.178) 0:00:31.463 ********* 2025-04-01 19:33:22.590473 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:22.591468 | orchestrator | 2025-04-01 19:33:22.592639 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-01 19:33:22.594232 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.145) 0:00:31.608 ********* 2025-04-01 19:33:22.746524 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:22.746744 | orchestrator | 2025-04-01 19:33:22.748376 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-01 19:33:22.748742 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.156) 0:00:31.765 ********* 2025-04-01 19:33:22.887226 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:33:22.887388 | orchestrator | 2025-04-01 19:33:22.887416 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-01 19:33:22.887940 | orchestrator | Tuesday 01 April 2025 19:33:22 +0000 (0:00:00.140) 0:00:31.905 ********* 2025-04-01 19:33:23.396512 | orchestrator | changed: [testbed-node-4] => { 2025-04-01 19:33:23.397331 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-01 19:33:23.401114 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:23.402205 | orchestrator |  "sdb": { 2025-04-01 19:33:23.402237 | orchestrator |  "osd_lvm_uuid": "52229b2b-1fb5-50ba-ad18-deadbd92af76" 2025-04-01 19:33:23.402415 | orchestrator |  }, 2025-04-01 19:33:23.403087 | orchestrator |  "sdc": { 2025-04-01 19:33:23.403813 | orchestrator |  "osd_lvm_uuid": "b9675d24-a7d4-5c32-a36a-48aa524d4563" 2025-04-01 19:33:23.404549 | orchestrator |  } 2025-04-01 19:33:23.405226 | orchestrator |  }, 2025-04-01 19:33:23.405712 | orchestrator |  "lvm_volumes": [ 2025-04-01 19:33:23.406434 | orchestrator |  { 2025-04-01 19:33:23.406841 | orchestrator |  "data": "osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76", 2025-04-01 19:33:23.407597 | orchestrator |  "data_vg": "ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76" 2025-04-01 19:33:23.407760 | orchestrator |  }, 2025-04-01 19:33:23.408016 | orchestrator |  { 2025-04-01 19:33:23.408384 | orchestrator |  "data": "osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563", 2025-04-01 19:33:23.408749 | orchestrator |  "data_vg": "ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563" 2025-04-01 19:33:23.409582 | orchestrator |  } 2025-04-01 19:33:23.409925 | orchestrator |  ] 2025-04-01 19:33:23.410524 | orchestrator |  } 2025-04-01 19:33:23.410839 | orchestrator | } 2025-04-01 19:33:23.410868 | orchestrator | 2025-04-01 19:33:23.411446 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-01 19:33:23.411821 | orchestrator | Tuesday 01 April 2025 19:33:23 +0000 (0:00:00.507) 0:00:32.413 ********* 2025-04-01 19:33:24.955772 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-01 19:33:24.956135 | orchestrator | 2025-04-01 19:33:24.956731 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-01 19:33:24.959107 | orchestrator | 2025-04-01 19:33:25.193728 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:33:25.193782 | orchestrator | Tuesday 01 April 2025 19:33:24 +0000 (0:00:01.560) 0:00:33.973 ********* 2025-04-01 19:33:25.193806 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-01 19:33:25.194680 | orchestrator | 2025-04-01 19:33:25.899349 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:33:25.899427 | orchestrator | Tuesday 01 April 2025 19:33:25 +0000 (0:00:00.238) 0:00:34.211 ********* 2025-04-01 19:33:25.899482 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:33:25.901869 | orchestrator | 2025-04-01 19:33:25.903558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:25.904288 | orchestrator | Tuesday 01 April 2025 19:33:25 +0000 (0:00:00.705) 0:00:34.917 ********* 2025-04-01 19:33:26.347033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-01 19:33:26.347506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-01 19:33:26.349125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-01 19:33:26.352094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-01 19:33:26.353125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-04-01 19:33:26.355205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-01 19:33:26.355645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-01 19:33:26.356824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-01 19:33:26.357636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-01 19:33:26.358069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-01 19:33:26.358965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-01 19:33:26.359781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-01 19:33:26.360068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-01 19:33:26.361139 | orchestrator | 2025-04-01 19:33:26.362195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:26.362517 | orchestrator | Tuesday 01 April 2025 19:33:26 +0000 (0:00:00.445) 0:00:35.362 ********* 2025-04-01 19:33:26.592021 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:26.592439 | orchestrator | 2025-04-01 19:33:26.592762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:26.593192 | orchestrator | Tuesday 01 April 2025 19:33:26 +0000 (0:00:00.245) 0:00:35.608 ********* 2025-04-01 19:33:26.810742 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:26.811384 | orchestrator | 2025-04-01 19:33:26.811685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:26.812993 | orchestrator | Tuesday 01 April 2025 19:33:26 +0000 (0:00:00.219) 0:00:35.827 ********* 2025-04-01 19:33:27.021160 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:27.022929 | orchestrator | 2025-04-01 19:33:27.024281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:27.026112 | orchestrator | Tuesday 01 April 2025 19:33:27 +0000 (0:00:00.210) 0:00:36.038 ********* 2025-04-01 19:33:27.245150 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:27.246162 | orchestrator | 2025-04-01 19:33:27.246817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:27.247627 | orchestrator | Tuesday 01 April 2025 19:33:27 +0000 (0:00:00.224) 0:00:36.263 ********* 2025-04-01 19:33:27.458936 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:27.460959 | orchestrator | 2025-04-01 19:33:27.461612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:27.462970 | orchestrator | Tuesday 01 April 2025 19:33:27 +0000 (0:00:00.214) 0:00:36.477 ********* 2025-04-01 19:33:27.680038 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:27.680543 | orchestrator | 2025-04-01 19:33:27.681351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:27.682392 | orchestrator | Tuesday 01 April 2025 19:33:27 +0000 (0:00:00.220) 0:00:36.698 ********* 2025-04-01 19:33:27.945950 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:27.946229 
| orchestrator | 2025-04-01 19:33:27.946260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:27.946282 | orchestrator | Tuesday 01 April 2025 19:33:27 +0000 (0:00:00.265) 0:00:36.964 ********* 2025-04-01 19:33:28.169427 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:28.170162 | orchestrator | 2025-04-01 19:33:28.170924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:28.171979 | orchestrator | Tuesday 01 April 2025 19:33:28 +0000 (0:00:00.221) 0:00:37.185 ********* 2025-04-01 19:33:28.863730 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c) 2025-04-01 19:33:28.864505 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c) 2025-04-01 19:33:28.866522 | orchestrator | 2025-04-01 19:33:28.866787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:28.867924 | orchestrator | Tuesday 01 April 2025 19:33:28 +0000 (0:00:00.689) 0:00:37.875 ********* 2025-04-01 19:33:29.355263 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7) 2025-04-01 19:33:29.357129 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7) 2025-04-01 19:33:29.357176 | orchestrator | 2025-04-01 19:33:29.358123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:29.359123 | orchestrator | Tuesday 01 April 2025 19:33:29 +0000 (0:00:00.493) 0:00:38.369 ********* 2025-04-01 19:33:29.821218 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c) 2025-04-01 19:33:29.821564 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c) 2025-04-01 19:33:29.822123 | orchestrator | 2025-04-01 19:33:29.822163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:29.822771 | orchestrator | Tuesday 01 April 2025 19:33:29 +0000 (0:00:00.468) 0:00:38.838 ********* 2025-04-01 19:33:30.343635 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8) 2025-04-01 19:33:30.345528 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8) 2025-04-01 19:33:30.345806 | orchestrator | 2025-04-01 19:33:30.348712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:33:30.756742 | orchestrator | Tuesday 01 April 2025 19:33:30 +0000 (0:00:00.523) 0:00:39.361 ********* 2025-04-01 19:33:30.756825 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:33:30.757777 | orchestrator | 2025-04-01 19:33:30.758566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:30.759364 | orchestrator | Tuesday 01 April 2025 19:33:30 +0000 (0:00:00.412) 0:00:39.774 ********* 2025-04-01 19:33:31.295827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-01 19:33:31.296488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-01 19:33:31.297371 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-01 19:33:31.298831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-01 19:33:31.299069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-01 19:33:31.299609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-01 19:33:31.300606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-01 19:33:31.301531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-01 19:33:31.302244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-01 19:33:31.303128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-01 19:33:31.303902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-01 19:33:31.304384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-01 19:33:31.305065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-01 19:33:31.305665 | orchestrator | 2025-04-01 19:33:31.306080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:31.306397 | orchestrator | Tuesday 01 April 2025 19:33:31 +0000 (0:00:00.539) 0:00:40.314 ********* 2025-04-01 19:33:31.535346 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:31.536298 | orchestrator | 2025-04-01 19:33:31.537409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:31.538326 | orchestrator | Tuesday 01 April 2025 19:33:31 +0000 (0:00:00.237) 0:00:40.551 ********* 2025-04-01 19:33:31.745792 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:31.746717 | orchestrator | 2025-04-01 19:33:31.748174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:31.748674 | orchestrator | Tuesday 01 April 2025 19:33:31 +0000 (0:00:00.212) 0:00:40.764 ********* 2025-04-01 19:33:31.963220 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:31.963489 | orchestrator | 2025-04-01 19:33:31.963560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:31.964613 | orchestrator | Tuesday 01 April 2025 19:33:31 +0000 (0:00:00.217) 0:00:40.982 ********* 2025-04-01 19:33:32.664822 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:32.664986 | orchestrator | 2025-04-01 19:33:32.666545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:32.667370 | orchestrator | Tuesday 01 April 2025 19:33:32 +0000 (0:00:00.699) 0:00:41.681 ********* 2025-04-01 19:33:32.901534 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:32.902194 | orchestrator | 2025-04-01 19:33:32.902619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:32.902650 | orchestrator | Tuesday 01 April 2025 19:33:32 +0000 (0:00:00.236) 0:00:41.918 ********* 2025-04-01 19:33:33.141673 | orchestrator | skipping: [testbed-node-5] 2025-04-01 
19:33:33.141899 | orchestrator | 2025-04-01 19:33:33.141926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:33.141946 | orchestrator | Tuesday 01 April 2025 19:33:33 +0000 (0:00:00.239) 0:00:42.158 ********* 2025-04-01 19:33:33.358007 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:33.358515 | orchestrator | 2025-04-01 19:33:33.360375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:33.572359 | orchestrator | Tuesday 01 April 2025 19:33:33 +0000 (0:00:00.216) 0:00:42.374 ********* 2025-04-01 19:33:33.572396 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:33.573066 | orchestrator | 2025-04-01 19:33:33.573537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:34.334371 | orchestrator | Tuesday 01 April 2025 19:33:33 +0000 (0:00:00.216) 0:00:42.591 ********* 2025-04-01 19:33:34.334464 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-01 19:33:34.334863 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-01 19:33:34.334892 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-01 19:33:34.336201 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-01 19:33:34.336231 | orchestrator | 2025-04-01 19:33:34.336934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:34.340187 | orchestrator | Tuesday 01 April 2025 19:33:34 +0000 (0:00:00.761) 0:00:43.352 ********* 2025-04-01 19:33:34.557700 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:34.557792 | orchestrator | 2025-04-01 19:33:34.558102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:34.558523 | orchestrator | Tuesday 01 April 2025 19:33:34 +0000 (0:00:00.223) 0:00:43.576 ********* 2025-04-01 19:33:34.802642 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:34.802903 | orchestrator | 2025-04-01 19:33:34.802930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:34.802949 | orchestrator | Tuesday 01 April 2025 19:33:34 +0000 (0:00:00.244) 0:00:43.820 ********* 2025-04-01 19:33:35.035475 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:35.036332 | orchestrator | 2025-04-01 19:33:35.247260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:33:35.247340 | orchestrator | Tuesday 01 April 2025 19:33:35 +0000 (0:00:00.233) 0:00:44.054 ********* 2025-04-01 19:33:35.247363 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:35.248269 | orchestrator | 2025-04-01 19:33:35.249634 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-01 19:33:35.250567 | orchestrator | Tuesday 01 April 2025 19:33:35 +0000 (0:00:00.209) 0:00:44.263 ********* 2025-04-01 19:33:35.668427 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-01 19:33:35.669496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-01 19:33:35.670470 | orchestrator | 2025-04-01 19:33:35.671105 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-01 19:33:35.672226 | orchestrator | Tuesday 01 April 2025 19:33:35 +0000 (0:00:00.420) 0:00:44.684 ********* 2025-04-01 19:33:35.835618 | 
orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:35.835882 | orchestrator | 2025-04-01 19:33:35.838959 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-01 19:33:35.839956 | orchestrator | Tuesday 01 April 2025 19:33:35 +0000 (0:00:00.169) 0:00:44.854 ********* 2025-04-01 19:33:35.991863 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:35.993064 | orchestrator | 2025-04-01 19:33:35.994102 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-01 19:33:35.995123 | orchestrator | Tuesday 01 April 2025 19:33:35 +0000 (0:00:00.155) 0:00:45.010 ********* 2025-04-01 19:33:36.145240 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:36.145430 | orchestrator | 2025-04-01 19:33:36.146359 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-01 19:33:36.147873 | orchestrator | Tuesday 01 April 2025 19:33:36 +0000 (0:00:00.149) 0:00:45.160 ********* 2025-04-01 19:33:36.313562 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:33:36.313765 | orchestrator | 2025-04-01 19:33:36.314334 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-01 19:33:36.314788 | orchestrator | Tuesday 01 April 2025 19:33:36 +0000 (0:00:00.171) 0:00:45.331 ********* 2025-04-01 19:33:36.513867 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959a80fb-1de6-50df-b35c-a247ba0dd9c7'}}) 2025-04-01 19:33:36.514259 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}}) 2025-04-01 19:33:36.517511 | orchestrator | 2025-04-01 19:33:36.518423 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-01 19:33:36.519046 | orchestrator | Tuesday 01 April 2025 19:33:36 +0000 (0:00:00.198) 0:00:45.530 ********* 2025-04-01 19:33:36.673019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959a80fb-1de6-50df-b35c-a247ba0dd9c7'}})  2025-04-01 19:33:36.674240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}})  2025-04-01 19:33:36.674274 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:36.675133 | orchestrator | 2025-04-01 19:33:36.675988 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-01 19:33:36.676924 | orchestrator | Tuesday 01 April 2025 19:33:36 +0000 (0:00:00.158) 0:00:45.689 ********* 2025-04-01 19:33:36.846850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959a80fb-1de6-50df-b35c-a247ba0dd9c7'}})  2025-04-01 19:33:36.847721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}})  2025-04-01 19:33:36.849419 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:36.850639 | orchestrator | 2025-04-01 19:33:36.852443 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-01 19:33:36.853665 | orchestrator | Tuesday 01 April 2025 19:33:36 +0000 (0:00:00.175) 0:00:45.864 ********* 2025-04-01 19:33:37.040028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '959a80fb-1de6-50df-b35c-a247ba0dd9c7'}})  2025-04-01 19:33:37.042676 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}})  2025-04-01 19:33:37.044931 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:37.045115 | orchestrator | 2025-04-01 19:33:37.045146 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-01 19:33:37.046134 | orchestrator | Tuesday 01 April 2025 19:33:37 +0000 (0:00:00.193) 0:00:46.058 ********* 2025-04-01 19:33:37.190364 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:33:37.191754 | orchestrator | 2025-04-01 19:33:37.194695 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-01 19:33:37.194795 | orchestrator | Tuesday 01 April 2025 19:33:37 +0000 (0:00:00.150) 0:00:46.208 ********* 2025-04-01 19:33:37.350423 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:33:37.350998 | orchestrator | 2025-04-01 19:33:37.352495 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-01 19:33:37.354999 | orchestrator | Tuesday 01 April 2025 19:33:37 +0000 (0:00:00.159) 0:00:46.368 ********* 2025-04-01 19:33:37.499835 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:37.501243 | orchestrator | 2025-04-01 19:33:37.501723 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-01 19:33:37.502557 | orchestrator | Tuesday 01 April 2025 19:33:37 +0000 (0:00:00.146) 0:00:46.514 ********* 2025-04-01 19:33:37.901982 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:37.902218 | orchestrator | 2025-04-01 19:33:37.902248 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-01 19:33:37.904431 | orchestrator | Tuesday 01 April 2025 19:33:37 +0000 (0:00:00.405) 0:00:46.919 ********* 2025-04-01 19:33:38.061350 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:38.061870 | orchestrator | 2025-04-01 19:33:38.062598 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-01 19:33:38.063368 | orchestrator | Tuesday 01 April 2025 19:33:38 +0000 (0:00:00.159) 0:00:47.079 ********* 2025-04-01 19:33:38.211964 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:33:38.214208 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:38.215212 | orchestrator |  "sdb": { 2025-04-01 19:33:38.215253 | orchestrator |  "osd_lvm_uuid": "959a80fb-1de6-50df-b35c-a247ba0dd9c7" 2025-04-01 19:33:38.216191 | orchestrator |  }, 2025-04-01 19:33:38.217947 | orchestrator |  "sdc": { 2025-04-01 19:33:38.218012 | orchestrator |  "osd_lvm_uuid": "cc43dffc-fbc4-5f6e-b48c-5e4474ee7050" 2025-04-01 19:33:38.218785 | orchestrator |  } 2025-04-01 19:33:38.219587 | orchestrator |  } 2025-04-01 19:33:38.219964 | orchestrator | } 2025-04-01 19:33:38.220534 | orchestrator | 2025-04-01 19:33:38.221121 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-01 19:33:38.221555 | orchestrator | Tuesday 01 April 2025 19:33:38 +0000 (0:00:00.151) 0:00:47.230 ********* 2025-04-01 19:33:38.370424 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:38.371383 | orchestrator | 2025-04-01 19:33:38.372468 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-01 19:33:38.375507 | orchestrator | Tuesday 01 April 2025 19:33:38 +0000 (0:00:00.157) 0:00:47.388 ********* 2025-04-01 
19:33:38.522396 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:38.523982 | orchestrator | 2025-04-01 19:33:38.524882 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-01 19:33:38.527464 | orchestrator | Tuesday 01 April 2025 19:33:38 +0000 (0:00:00.150) 0:00:47.539 ********* 2025-04-01 19:33:38.671726 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:33:38.672944 | orchestrator | 2025-04-01 19:33:38.673786 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-01 19:33:38.674876 | orchestrator | Tuesday 01 April 2025 19:33:38 +0000 (0:00:00.151) 0:00:47.690 ********* 2025-04-01 19:33:39.021729 | orchestrator | changed: [testbed-node-5] => { 2025-04-01 19:33:39.022533 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-01 19:33:39.022576 | orchestrator |  "ceph_osd_devices": { 2025-04-01 19:33:39.022940 | orchestrator |  "sdb": { 2025-04-01 19:33:39.024230 | orchestrator |  "osd_lvm_uuid": "959a80fb-1de6-50df-b35c-a247ba0dd9c7" 2025-04-01 19:33:39.024911 | orchestrator |  }, 2025-04-01 19:33:39.025515 | orchestrator |  "sdc": { 2025-04-01 19:33:39.025792 | orchestrator |  "osd_lvm_uuid": "cc43dffc-fbc4-5f6e-b48c-5e4474ee7050" 2025-04-01 19:33:39.027095 | orchestrator |  } 2025-04-01 19:33:39.027739 | orchestrator |  }, 2025-04-01 19:33:39.028710 | orchestrator |  "lvm_volumes": [ 2025-04-01 19:33:39.029121 | orchestrator |  { 2025-04-01 19:33:39.030248 | orchestrator |  "data": "osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7", 2025-04-01 19:33:39.032384 | orchestrator |  "data_vg": "ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7" 2025-04-01 19:33:39.032888 | orchestrator |  }, 2025-04-01 19:33:39.033020 | orchestrator |  { 2025-04-01 19:33:39.033921 | orchestrator |  "data": "osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050", 2025-04-01 19:33:39.034146 | orchestrator |  "data_vg": "ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050" 2025-04-01 19:33:39.034602 | orchestrator |  } 2025-04-01 19:33:39.035282 | orchestrator |  ] 2025-04-01 19:33:39.035760 | orchestrator |  } 2025-04-01 19:33:39.035990 | orchestrator | } 2025-04-01 19:33:39.036701 | orchestrator | 2025-04-01 19:33:39.036996 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-01 19:33:39.037361 | orchestrator | Tuesday 01 April 2025 19:33:39 +0000 (0:00:00.346) 0:00:48.037 ********* 2025-04-01 19:33:40.427463 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-01 19:33:40.428772 | orchestrator | 2025-04-01 19:33:40.431146 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:33:40.433061 | orchestrator | 2025-04-01 19:33:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:33:40.434784 | orchestrator | 2025-04-01 19:33:40 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:33:40.434837 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-01 19:33:40.436245 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-01 19:33:40.437925 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-01 19:33:40.439036 | orchestrator |
2025-04-01 19:33:40.439986 | orchestrator |
2025-04-01 19:33:40.441623 | orchestrator |
2025-04-01 19:33:40.444197 | orchestrator | TASKS RECAP ********************************************************************
2025-04-01 19:33:40.444230 | orchestrator | Tuesday 01 April 2025 19:33:40 +0000 (0:00:01.407) 0:00:49.444 *********
2025-04-01 19:33:40.444889 | orchestrator | ===============================================================================
2025-04-01 19:33:40.445691 | orchestrator | Write configuration file ------------------------------------------------ 5.41s
2025-04-01 19:33:40.445918 | orchestrator | Add known links to the list of available block devices ------------------ 1.51s
2025-04-01 19:33:40.446890 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s
2025-04-01 19:33:40.447664 | orchestrator | Get initial list of available block devices ----------------------------- 1.24s
2025-04-01 19:33:40.447711 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s
2025-04-01 19:33:40.447987 | orchestrator | Print configuration data ------------------------------------------------ 1.13s
2025-04-01 19:33:40.448483 | orchestrator | Add known links to the list of available block devices ------------------ 1.07s
2025-04-01 19:33:40.449082 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2025-04-01 19:33:40.450160 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.84s
2025-04-01 19:33:40.450656 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2025-04-01 19:33:40.451118 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2025-04-01 19:33:40.451767 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-04-01 19:33:40.452001 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.76s
2025-04-01 19:33:40.452475 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-04-01 19:33:40.453033 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-04-01 19:33:40.453349 | orchestrator | Set WAL devices config data --------------------------------------------- 0.70s
2025-04-01 19:33:40.453732 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-04-01 19:33:40.454261 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-04-01 19:33:40.454510 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.68s
2025-04-01 19:33:40.455073 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-04-01 19:33:52.815244 | orchestrator | 2025-04-01 19:33:52 | INFO  | Task 7642b188-56d9-4a04-9286-b7bf59a4cb0c is running in background. Output coming soon.
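Note on the play that just finished: for every OSD disk in ceph_osd_devices the generated osd_lvm_uuid is turned into one lvm_volumes entry with a data LV named osd-block-<uuid> inside a VG named ceph-<uuid>. A minimal sketch of the per-host YAML the "Write configuration file" handler presumably produces on testbed-manager, assuming it simply dumps the printed _ceph_configure_lvm_config_data for testbed-node-5 verbatim (the target file name and location are not shown in the log and are hypothetical):

```yaml
# Hypothetical host configuration for testbed-node-5; values taken from the
# "Print configuration data" output above, file path assumed.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 959a80fb-1de6-50df-b35c-a247ba0dd9c7
  sdc:
    osd_lvm_uuid: cc43dffc-fbc4-5f6e-b48c-5e4474ee7050
lvm_volumes:
  - data: osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7
    data_vg: ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7
  - data: osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050
    data_vg: ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050
```

The same structure is written for testbed-node-3 and testbed-node-4 with their respective UUIDs, as shown in their "Print configuration data" output.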
2025-04-01 19:34:21.979168 | orchestrator | 2025-04-01 19:34:12 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-04-01 19:34:23.894470 | orchestrator | 2025-04-01 19:34:12 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-04-01 19:34:23.894609 | orchestrator | 2025-04-01 19:34:12 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-04-01 19:34:23.894628 | orchestrator | 2025-04-01 19:34:12 | INFO  | Handling group overwrites in 99-overwrite
2025-04-01 19:34:23.894661 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group ceph-mds from 50-ceph
2025-04-01 19:34:23.894692 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group ceph-rgw from 50-ceph
2025-04-01 19:34:23.894707 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group netbird:children from 50-infrastruture
2025-04-01 19:34:23.894722 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group storage:children from 50-kolla
2025-04-01 19:34:23.894737 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group frr:children from 60-generic
2025-04-01 19:34:23.894751 | orchestrator | 2025-04-01 19:34:12 | INFO  | Handling group overwrites in 20-roles
2025-04-01 19:34:23.894766 | orchestrator | 2025-04-01 19:34:12 | INFO  | Removing group k3s_node from 50-infrastruture
2025-04-01 19:34:23.894780 | orchestrator | 2025-04-01 19:34:13 | INFO  | File 20-netbox not found in /inventory.pre/
2025-04-01 19:34:23.894794 | orchestrator | 2025-04-01 19:34:21 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-04-01 19:34:23.894827 | orchestrator | 2025-04-01 19:34:23 | INFO  | Task 522582db-3354-45eb-abe1-a2f6dc2d58fc (ceph-create-lvm-devices) was prepared for execution.
2025-04-01 19:34:27.432518 | orchestrator | 2025-04-01 19:34:23 | INFO  | It takes a moment until task 522582db-3354-45eb-abe1-a2f6dc2d58fc (ceph-create-lvm-devices) has been started and output is visible here.
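Note on the ceph-create-lvm-devices play that starts below: it turns each lvm_volumes entry into an LVM volume group and logical volume ("Create block VGs" / "Create block LVs"). A minimal sketch of equivalent Ansible tasks for one testbed-node-3 entry, using the community.general.lvg and community.general.lvol modules; the module choice, the backing physical volume (/dev/sdb) and the sizing are assumptions, not taken from the actual play:

```yaml
# Sketch only: one VG + one LV for the sdb entry of testbed-node-3.
# VG/LV names are taken from the log; pvs and size are assumptions.
- name: Create block VG for one OSD
  community.general.lvg:
    vg: ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834      # data_vg from lvm_volumes
    pvs: /dev/sdb                                       # assumed backing device

- name: Create block LV spanning the VG
  community.general.lvol:
    vg: ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834
    lv: osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834  # data from lvm_volumes
    size: 100%FREE                                      # assumed: use the whole VG
```

Deriving the VG/LV names from the per-device osd_lvm_uuid rather than from the sdX name keeps the OSD layout stable even if kernel device names change between boots.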
2025-04-01 19:34:27.432679 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:34:28.069240 | orchestrator | 2025-04-01 19:34:28.070364 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-01 19:34:28.070985 | orchestrator | 2025-04-01 19:34:28.071965 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:34:28.072418 | orchestrator | Tuesday 01 April 2025 19:34:28 +0000 (0:00:00.543) 0:00:00.543 ********* 2025-04-01 19:34:28.321809 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-01 19:34:28.322121 | orchestrator | 2025-04-01 19:34:28.322873 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:34:28.323740 | orchestrator | Tuesday 01 April 2025 19:34:28 +0000 (0:00:00.254) 0:00:00.798 ********* 2025-04-01 19:34:28.612795 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:28.612951 | orchestrator | 2025-04-01 19:34:28.613711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:28.614146 | orchestrator | Tuesday 01 April 2025 19:34:28 +0000 (0:00:00.290) 0:00:01.089 ********* 2025-04-01 19:34:29.497900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-01 19:34:29.498468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-01 19:34:29.498510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-01 19:34:29.498535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-01 19:34:29.498894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-01 19:34:29.499250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-01 19:34:29.500672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-01 19:34:29.501593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-01 19:34:29.502066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-01 19:34:29.502856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-01 19:34:29.503357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-01 19:34:29.504372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-01 19:34:29.504592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-01 19:34:29.505287 | orchestrator | 2025-04-01 19:34:29.506113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:29.506740 | orchestrator | Tuesday 01 April 2025 19:34:29 +0000 (0:00:00.884) 0:00:01.973 ********* 2025-04-01 19:34:29.750965 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:29.752194 | orchestrator | 2025-04-01 19:34:29.753196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:29.753794 | orchestrator | Tuesday 01 April 2025 19:34:29 +0000 
(0:00:00.254) 0:00:02.228 ********* 2025-04-01 19:34:29.964025 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:29.964216 | orchestrator | 2025-04-01 19:34:29.964656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:29.965441 | orchestrator | Tuesday 01 April 2025 19:34:29 +0000 (0:00:00.212) 0:00:02.440 ********* 2025-04-01 19:34:30.226205 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:30.226436 | orchestrator | 2025-04-01 19:34:30.227298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:30.230219 | orchestrator | Tuesday 01 April 2025 19:34:30 +0000 (0:00:00.260) 0:00:02.701 ********* 2025-04-01 19:34:30.466286 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:30.691756 | orchestrator | 2025-04-01 19:34:30.691811 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:30.691828 | orchestrator | Tuesday 01 April 2025 19:34:30 +0000 (0:00:00.239) 0:00:02.941 ********* 2025-04-01 19:34:30.691853 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:30.691910 | orchestrator | 2025-04-01 19:34:30.692826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:30.693426 | orchestrator | Tuesday 01 April 2025 19:34:30 +0000 (0:00:00.221) 0:00:03.163 ********* 2025-04-01 19:34:30.930709 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:30.930861 | orchestrator | 2025-04-01 19:34:30.930892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:30.931395 | orchestrator | Tuesday 01 April 2025 19:34:30 +0000 (0:00:00.243) 0:00:03.406 ********* 2025-04-01 19:34:31.183824 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:31.184004 | orchestrator | 2025-04-01 19:34:31.184028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:31.184049 | orchestrator | Tuesday 01 April 2025 19:34:31 +0000 (0:00:00.252) 0:00:03.658 ********* 2025-04-01 19:34:31.418681 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:31.420547 | orchestrator | 2025-04-01 19:34:31.421296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:31.421988 | orchestrator | Tuesday 01 April 2025 19:34:31 +0000 (0:00:00.235) 0:00:03.894 ********* 2025-04-01 19:34:32.151556 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e) 2025-04-01 19:34:32.151781 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e) 2025-04-01 19:34:32.152262 | orchestrator | 2025-04-01 19:34:32.152491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:32.152523 | orchestrator | Tuesday 01 April 2025 19:34:32 +0000 (0:00:00.732) 0:00:04.626 ********* 2025-04-01 19:34:32.888271 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1) 2025-04-01 19:34:32.888524 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1) 2025-04-01 19:34:32.891432 | orchestrator | 2025-04-01 19:34:33.411132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 
19:34:33.411288 | orchestrator | Tuesday 01 April 2025 19:34:32 +0000 (0:00:00.736) 0:00:05.363 ********* 2025-04-01 19:34:33.411370 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72) 2025-04-01 19:34:33.411777 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72) 2025-04-01 19:34:33.411816 | orchestrator | 2025-04-01 19:34:33.411889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:33.904389 | orchestrator | Tuesday 01 April 2025 19:34:33 +0000 (0:00:00.524) 0:00:05.888 ********* 2025-04-01 19:34:33.904583 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03) 2025-04-01 19:34:33.904681 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03) 2025-04-01 19:34:33.905231 | orchestrator | 2025-04-01 19:34:33.906390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:33.906838 | orchestrator | Tuesday 01 April 2025 19:34:33 +0000 (0:00:00.492) 0:00:06.380 ********* 2025-04-01 19:34:34.264491 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:34:34.264721 | orchestrator | 2025-04-01 19:34:34.799021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:34.799160 | orchestrator | Tuesday 01 April 2025 19:34:34 +0000 (0:00:00.358) 0:00:06.738 ********* 2025-04-01 19:34:34.799196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-01 19:34:34.799303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-01 19:34:34.799358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-01 19:34:34.799377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-01 19:34:34.800125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-01 19:34:34.800746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-01 19:34:34.801640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-01 19:34:34.802225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-01 19:34:34.802649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-01 19:34:34.803279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-01 19:34:34.804028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-01 19:34:34.804358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-01 19:34:34.804666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-01 19:34:34.805100 | orchestrator | 2025-04-01 19:34:34.805506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:34.805907 | orchestrator | Tuesday 01 April 2025 19:34:34 +0000 
(0:00:00.531) 0:00:07.270 ********* 2025-04-01 19:34:35.013145 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:35.014621 | orchestrator | 2025-04-01 19:34:35.017481 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:35.258221 | orchestrator | Tuesday 01 April 2025 19:34:35 +0000 (0:00:00.217) 0:00:07.488 ********* 2025-04-01 19:34:35.258304 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:35.260505 | orchestrator | 2025-04-01 19:34:35.262914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:35.483778 | orchestrator | Tuesday 01 April 2025 19:34:35 +0000 (0:00:00.246) 0:00:07.734 ********* 2025-04-01 19:34:35.483832 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:35.484651 | orchestrator | 2025-04-01 19:34:35.489488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:35.716167 | orchestrator | Tuesday 01 April 2025 19:34:35 +0000 (0:00:00.225) 0:00:07.959 ********* 2025-04-01 19:34:35.716215 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:35.718532 | orchestrator | 2025-04-01 19:34:35.719270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:35.719298 | orchestrator | Tuesday 01 April 2025 19:34:35 +0000 (0:00:00.230) 0:00:08.190 ********* 2025-04-01 19:34:36.452044 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:36.452593 | orchestrator | 2025-04-01 19:34:36.452935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:36.453609 | orchestrator | Tuesday 01 April 2025 19:34:36 +0000 (0:00:00.738) 0:00:08.929 ********* 2025-04-01 19:34:36.685261 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:36.685893 | orchestrator | 2025-04-01 19:34:36.687273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:36.688946 | orchestrator | Tuesday 01 April 2025 19:34:36 +0000 (0:00:00.231) 0:00:09.160 ********* 2025-04-01 19:34:36.933262 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:36.934516 | orchestrator | 2025-04-01 19:34:36.935458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:36.936609 | orchestrator | Tuesday 01 April 2025 19:34:36 +0000 (0:00:00.249) 0:00:09.410 ********* 2025-04-01 19:34:37.143067 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:37.143457 | orchestrator | 2025-04-01 19:34:37.144778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:37.144943 | orchestrator | Tuesday 01 April 2025 19:34:37 +0000 (0:00:00.209) 0:00:09.619 ********* 2025-04-01 19:34:37.888649 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-01 19:34:37.891158 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-01 19:34:37.891643 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-01 19:34:37.892504 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-01 19:34:37.893194 | orchestrator | 2025-04-01 19:34:37.893478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:37.893909 | orchestrator | Tuesday 01 April 2025 19:34:37 +0000 (0:00:00.733) 0:00:10.352 ********* 2025-04-01 19:34:38.121128 | orchestrator | skipping: 
[testbed-node-3] 2025-04-01 19:34:38.121467 | orchestrator | 2025-04-01 19:34:38.124949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:38.333601 | orchestrator | Tuesday 01 April 2025 19:34:38 +0000 (0:00:00.243) 0:00:10.596 ********* 2025-04-01 19:34:38.333689 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:38.333740 | orchestrator | 2025-04-01 19:34:38.335841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:38.337051 | orchestrator | Tuesday 01 April 2025 19:34:38 +0000 (0:00:00.212) 0:00:10.808 ********* 2025-04-01 19:34:38.539943 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:38.541434 | orchestrator | 2025-04-01 19:34:38.541794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:34:38.542681 | orchestrator | Tuesday 01 April 2025 19:34:38 +0000 (0:00:00.208) 0:00:11.017 ********* 2025-04-01 19:34:38.770671 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:38.770920 | orchestrator | 2025-04-01 19:34:38.771853 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-01 19:34:38.772399 | orchestrator | Tuesday 01 April 2025 19:34:38 +0000 (0:00:00.229) 0:00:11.246 ********* 2025-04-01 19:34:38.969585 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:38.971504 | orchestrator | 2025-04-01 19:34:38.973264 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-01 19:34:38.974646 | orchestrator | Tuesday 01 April 2025 19:34:38 +0000 (0:00:00.198) 0:00:11.445 ********* 2025-04-01 19:34:39.456445 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'bdd573d7-384a-5f49-8a42-9b210b6d8834'}}) 2025-04-01 19:34:39.457184 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '988d16a2-b35c-5840-9d7c-a8265d6d87f9'}}) 2025-04-01 19:34:39.457216 | orchestrator | 2025-04-01 19:34:39.457937 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-01 19:34:39.458241 | orchestrator | Tuesday 01 April 2025 19:34:39 +0000 (0:00:00.487) 0:00:11.932 ********* 2025-04-01 19:34:41.197628 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'}) 2025-04-01 19:34:41.199355 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'}) 2025-04-01 19:34:41.199504 | orchestrator | 2025-04-01 19:34:41.201095 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-01 19:34:41.201412 | orchestrator | Tuesday 01 April 2025 19:34:41 +0000 (0:00:01.739) 0:00:13.671 ********* 2025-04-01 19:34:41.370646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:41.370804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:41.371533 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:41.372437 | orchestrator | 2025-04-01 19:34:41.374119 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-01 19:34:41.374158 | orchestrator | Tuesday 01 April 2025 19:34:41 +0000 (0:00:00.176) 0:00:13.847 ********* 2025-04-01 19:34:42.877410 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'}) 2025-04-01 19:34:42.877899 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'}) 2025-04-01 19:34:42.880796 | orchestrator | 2025-04-01 19:34:43.071674 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-01 19:34:43.071794 | orchestrator | Tuesday 01 April 2025 19:34:42 +0000 (0:00:01.504) 0:00:15.352 ********* 2025-04-01 19:34:43.071839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:43.225887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:43.225945 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:43.225963 | orchestrator | 2025-04-01 19:34:43.225979 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-01 19:34:43.225994 | orchestrator | Tuesday 01 April 2025 19:34:43 +0000 (0:00:00.193) 0:00:15.546 ********* 2025-04-01 19:34:43.226061 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:43.227163 | orchestrator | 2025-04-01 19:34:43.228306 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-01 19:34:43.230348 | orchestrator | Tuesday 01 April 2025 19:34:43 +0000 (0:00:00.156) 0:00:15.702 ********* 2025-04-01 19:34:43.403501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:43.404877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:43.405508 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:43.406814 | orchestrator | 2025-04-01 19:34:43.407762 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-01 19:34:43.408058 | orchestrator | Tuesday 01 April 2025 19:34:43 +0000 (0:00:00.177) 0:00:15.879 ********* 2025-04-01 19:34:43.561592 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:43.563439 | orchestrator | 2025-04-01 19:34:43.755569 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-01 19:34:43.755610 | orchestrator | Tuesday 01 April 2025 19:34:43 +0000 (0:00:00.156) 0:00:16.035 ********* 2025-04-01 19:34:43.755633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:43.756484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:43.756668 | orchestrator | skipping: 
[testbed-node-3] 2025-04-01 19:34:43.758453 | orchestrator | 2025-04-01 19:34:43.758485 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-01 19:34:43.758852 | orchestrator | Tuesday 01 April 2025 19:34:43 +0000 (0:00:00.196) 0:00:16.232 ********* 2025-04-01 19:34:44.108747 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:44.109298 | orchestrator | 2025-04-01 19:34:44.109770 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-01 19:34:44.110504 | orchestrator | Tuesday 01 April 2025 19:34:44 +0000 (0:00:00.353) 0:00:16.586 ********* 2025-04-01 19:34:44.294213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:44.295720 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:44.297431 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:44.299139 | orchestrator | 2025-04-01 19:34:44.299961 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-01 19:34:44.301052 | orchestrator | Tuesday 01 April 2025 19:34:44 +0000 (0:00:00.183) 0:00:16.769 ********* 2025-04-01 19:34:44.457246 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:44.457635 | orchestrator | 2025-04-01 19:34:44.458680 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-01 19:34:44.459853 | orchestrator | Tuesday 01 April 2025 19:34:44 +0000 (0:00:00.160) 0:00:16.930 ********* 2025-04-01 19:34:44.616563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:44.618084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:44.619440 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:44.621527 | orchestrator | 2025-04-01 19:34:44.621618 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-01 19:34:44.621642 | orchestrator | Tuesday 01 April 2025 19:34:44 +0000 (0:00:00.163) 0:00:17.094 ********* 2025-04-01 19:34:44.810556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:44.811751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:44.812368 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:44.813212 | orchestrator | 2025-04-01 19:34:44.814808 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-01 19:34:44.818184 | orchestrator | Tuesday 01 April 2025 19:34:44 +0000 (0:00:00.193) 0:00:17.287 ********* 2025-04-01 19:34:45.009276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:45.010806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:45.013635 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:45.014536 | orchestrator | 2025-04-01 19:34:45.014977 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-01 19:34:45.015520 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.197) 0:00:17.484 ********* 2025-04-01 19:34:45.186666 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:45.187694 | orchestrator | 2025-04-01 19:34:45.188987 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-01 19:34:45.189740 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.178) 0:00:17.663 ********* 2025-04-01 19:34:45.325109 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:45.325460 | orchestrator | 2025-04-01 19:34:45.327540 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-01 19:34:45.328454 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.135) 0:00:17.798 ********* 2025-04-01 19:34:45.455264 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:45.455561 | orchestrator | 2025-04-01 19:34:45.456986 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-01 19:34:45.457286 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.132) 0:00:17.931 ********* 2025-04-01 19:34:45.599808 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:34:45.601028 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-01 19:34:45.603125 | orchestrator | } 2025-04-01 19:34:45.603925 | orchestrator | 2025-04-01 19:34:45.605576 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-01 19:34:45.605873 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.145) 0:00:18.077 ********* 2025-04-01 19:34:45.761747 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:34:45.763225 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-01 19:34:45.764670 | orchestrator | } 2025-04-01 19:34:45.765585 | orchestrator | 2025-04-01 19:34:45.767042 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-01 19:34:45.767699 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.160) 0:00:18.237 ********* 2025-04-01 19:34:45.922438 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:34:45.923077 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-01 19:34:45.923995 | orchestrator | } 2025-04-01 19:34:45.925224 | orchestrator | 2025-04-01 19:34:45.927446 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-01 19:34:45.928415 | orchestrator | Tuesday 01 April 2025 19:34:45 +0000 (0:00:00.161) 0:00:18.399 ********* 2025-04-01 19:34:46.867831 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:46.868455 | orchestrator | 2025-04-01 19:34:46.869239 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-01 19:34:46.869672 | orchestrator | Tuesday 01 April 2025 19:34:46 +0000 (0:00:00.944) 0:00:19.344 ********* 2025-04-01 19:34:47.364278 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:47.364791 | orchestrator | 2025-04-01 19:34:47.365091 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
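The three "Gather … VGs with total and available size in bytes" tasks and the later "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step produce the vgs_report printed a few lines further down; it is empty on this node because no dedicated ceph_db_devices or ceph_wal_devices are defined for the testbed. Data of this shape is typically collected with the LVM CLI in JSON mode; the register and fact names below are inferred from the task titles and are not confirmed by this log.

# Hedged sketch: report VG name, total size and free space in bytes as JSON.
# (The real tasks presumably restrict the output to DB/WAL VGs, which is why
# vgs_report ends up empty here even though ceph-* block VGs exist.)
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >
    vgs --units b --nosuffix --reportformat json
    -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"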
**************** 2025-04-01 19:34:47.365901 | orchestrator | Tuesday 01 April 2025 19:34:47 +0000 (0:00:00.496) 0:00:19.840 ********* 2025-04-01 19:34:47.865458 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:47.866168 | orchestrator | 2025-04-01 19:34:47.866768 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-01 19:34:47.867420 | orchestrator | Tuesday 01 April 2025 19:34:47 +0000 (0:00:00.502) 0:00:20.343 ********* 2025-04-01 19:34:48.021729 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:48.022977 | orchestrator | 2025-04-01 19:34:48.025399 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-01 19:34:48.025698 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.156) 0:00:20.499 ********* 2025-04-01 19:34:48.140106 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:48.140579 | orchestrator | 2025-04-01 19:34:48.141996 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-01 19:34:48.143355 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.117) 0:00:20.617 ********* 2025-04-01 19:34:48.289030 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:48.290137 | orchestrator | 2025-04-01 19:34:48.291450 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-01 19:34:48.292425 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.147) 0:00:20.764 ********* 2025-04-01 19:34:48.468915 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:34:48.471540 | orchestrator |  "vgs_report": { 2025-04-01 19:34:48.473447 | orchestrator |  "vg": [] 2025-04-01 19:34:48.473654 | orchestrator |  } 2025-04-01 19:34:48.474356 | orchestrator | } 2025-04-01 19:34:48.474693 | orchestrator | 2025-04-01 19:34:48.475004 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-01 19:34:48.475029 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.179) 0:00:20.944 ********* 2025-04-01 19:34:48.606551 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:48.606851 | orchestrator | 2025-04-01 19:34:48.607934 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-01 19:34:48.608717 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.137) 0:00:21.082 ********* 2025-04-01 19:34:48.767160 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:48.767538 | orchestrator | 2025-04-01 19:34:48.767565 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-01 19:34:48.768408 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.161) 0:00:21.243 ********* 2025-04-01 19:34:48.908480 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:48.908604 | orchestrator | 2025-04-01 19:34:48.908790 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-01 19:34:48.909214 | orchestrator | Tuesday 01 April 2025 19:34:48 +0000 (0:00:00.142) 0:00:21.386 ********* 2025-04-01 19:34:49.043881 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:49.044674 | orchestrator | 2025-04-01 19:34:49.046114 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-01 19:34:49.046140 | orchestrator | Tuesday 01 April 2025 19:34:49 +0000 (0:00:00.133) 0:00:21.519 ********* 2025-04-01 
19:34:49.414120 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:49.415108 | orchestrator | 2025-04-01 19:34:49.416741 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-01 19:34:49.418491 | orchestrator | Tuesday 01 April 2025 19:34:49 +0000 (0:00:00.370) 0:00:21.889 ********* 2025-04-01 19:34:49.604097 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:49.604899 | orchestrator | 2025-04-01 19:34:49.605477 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-01 19:34:49.605989 | orchestrator | Tuesday 01 April 2025 19:34:49 +0000 (0:00:00.191) 0:00:22.081 ********* 2025-04-01 19:34:49.773805 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:49.773979 | orchestrator | 2025-04-01 19:34:49.774931 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-01 19:34:49.776302 | orchestrator | Tuesday 01 April 2025 19:34:49 +0000 (0:00:00.168) 0:00:22.249 ********* 2025-04-01 19:34:49.934242 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:49.935823 | orchestrator | 2025-04-01 19:34:49.937067 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-01 19:34:49.937898 | orchestrator | Tuesday 01 April 2025 19:34:49 +0000 (0:00:00.161) 0:00:22.411 ********* 2025-04-01 19:34:50.067458 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.068004 | orchestrator | 2025-04-01 19:34:50.069304 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-01 19:34:50.072142 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.132) 0:00:22.544 ********* 2025-04-01 19:34:50.200298 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.201398 | orchestrator | 2025-04-01 19:34:50.201426 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-01 19:34:50.202252 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.132) 0:00:22.677 ********* 2025-04-01 19:34:50.371780 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.374362 | orchestrator | 2025-04-01 19:34:50.374962 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-01 19:34:50.375990 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.167) 0:00:22.844 ********* 2025-04-01 19:34:50.524367 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.524955 | orchestrator | 2025-04-01 19:34:50.526387 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-01 19:34:50.527137 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.156) 0:00:23.001 ********* 2025-04-01 19:34:50.655875 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.656029 | orchestrator | 2025-04-01 19:34:50.656522 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-01 19:34:50.658771 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.131) 0:00:23.133 ********* 2025-04-01 19:34:50.800811 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.800953 | orchestrator | 2025-04-01 19:34:50.802111 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-01 19:34:50.802872 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.144) 0:00:23.277 
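Everything from "Calculate size needed for LVs on ceph_db_devices" through the two "Fail if DB LV size < 30 GiB …" checks is skipped on this node because no separate DB/WAL devices are configured; the checks only matter when such devices exist and the requested LVs might not fit or might fall below the recommended 30 GiB BlueStore DB size. The guard itself is a plain conditional failure; in the sketch below _db_lv_sizes is a placeholder for whatever per-device size map the real tasks calculate, not a name taken from the playbook.

# Hedged sketch of a "Fail if DB LV size < 30 GiB" style guard.
- name: Fail if DB LV size < 30 GiB for ceph_db_devices
  ansible.builtin.fail:
    msg: >-
      DB LV for {{ item.key }} would be {{ item.value | int }} bytes,
      below the 30 GiB minimum recommended for a BlueStore DB volume.
  loop: "{{ _db_lv_sizes | default({}) | dict2items }}"
  when: (item.value | int) < 30 * 1024 * 1024 * 1024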
********* 2025-04-01 19:34:50.977352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:50.977623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:50.977685 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:50.980444 | orchestrator | 2025-04-01 19:34:50.981453 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-01 19:34:50.983838 | orchestrator | Tuesday 01 April 2025 19:34:50 +0000 (0:00:00.173) 0:00:23.451 ********* 2025-04-01 19:34:51.396401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:51.397161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:51.397404 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:51.397674 | orchestrator | 2025-04-01 19:34:51.398131 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-01 19:34:51.398584 | orchestrator | Tuesday 01 April 2025 19:34:51 +0000 (0:00:00.421) 0:00:23.872 ********* 2025-04-01 19:34:51.577090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:51.579381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:51.579425 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:51.581526 | orchestrator | 2025-04-01 19:34:51.581835 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-01 19:34:51.581864 | orchestrator | Tuesday 01 April 2025 19:34:51 +0000 (0:00:00.180) 0:00:24.053 ********* 2025-04-01 19:34:51.752145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:51.753540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:51.753578 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:51.754849 | orchestrator | 2025-04-01 19:34:51.756598 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-01 19:34:51.937283 | orchestrator | Tuesday 01 April 2025 19:34:51 +0000 (0:00:00.174) 0:00:24.227 ********* 2025-04-01 19:34:51.937399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:51.938607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:51.938641 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:51.940145 | orchestrator | 2025-04-01 19:34:51.940177 | 
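The "Create DB LVs …" and "Create WAL LVs …" tasks above are skipped for the same reason, while the earlier "Create block LVs" task did run and created one osd-block-<uuid> LV per block VG. The loop items in the log ({'data': …, 'data_vg': …}) match the usual ceph-ansible lvm_volumes structure; assuming that structure, the LV creation reduces to a single module call, sketched below.

# Hedged sketch: create the block LV that fills each ceph-<uuid> VG.
- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: "100%VG"
  loop: "{{ lvm_volumes }}"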
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-01 19:34:51.941722 | orchestrator | Tuesday 01 April 2025 19:34:51 +0000 (0:00:00.185) 0:00:24.413 ********* 2025-04-01 19:34:52.105718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:52.106892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:52.108163 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:52.108915 | orchestrator | 2025-04-01 19:34:52.109102 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-01 19:34:52.110301 | orchestrator | Tuesday 01 April 2025 19:34:52 +0000 (0:00:00.166) 0:00:24.580 ********* 2025-04-01 19:34:52.303012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:52.303224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:52.303638 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:52.304413 | orchestrator | 2025-04-01 19:34:52.305117 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-01 19:34:52.305228 | orchestrator | Tuesday 01 April 2025 19:34:52 +0000 (0:00:00.199) 0:00:24.779 ********* 2025-04-01 19:34:52.537749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:52.538486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:52.539128 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:52.539871 | orchestrator | 2025-04-01 19:34:52.541166 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-01 19:34:52.544444 | orchestrator | Tuesday 01 April 2025 19:34:52 +0000 (0:00:00.233) 0:00:25.013 ********* 2025-04-01 19:34:53.058601 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:53.059485 | orchestrator | 2025-04-01 19:34:53.059526 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-01 19:34:53.059837 | orchestrator | Tuesday 01 April 2025 19:34:53 +0000 (0:00:00.522) 0:00:25.535 ********* 2025-04-01 19:34:53.556410 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:53.556592 | orchestrator | 2025-04-01 19:34:53.557454 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-01 19:34:53.557576 | orchestrator | Tuesday 01 April 2025 19:34:53 +0000 (0:00:00.495) 0:00:26.030 ********* 2025-04-01 19:34:53.712671 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:34:53.714264 | orchestrator | 2025-04-01 19:34:53.715415 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-01 19:34:53.716293 | orchestrator | Tuesday 01 April 2025 19:34:53 +0000 (0:00:00.157) 0:00:26.188 ********* 2025-04-01 19:34:53.908500 | 
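After the LV creation block, "Get list of Ceph LVs with associated VGs", "Get list of Ceph PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" verify the result and feed the lvm_report printed at the end of this play: two ceph-* VGs on /dev/sdb and /dev/sdc, each holding one osd-block-* LV. A hedged sketch of that verification step follows; the register names are taken from the task title, and the vg_name selection filter is an assumption.

# Hedged sketch: list ceph-* LVs and PVs as JSON and merge into lvm_report.
- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: >
    lvs --reportformat json -o lv_name,vg_name -S vg_name=~^ceph-
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: >
    pvs --reportformat json -o pv_name,vg_name -S vg_name=~^ceph-
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    lvm_report: >-
      {{ (_lvs_cmd_output.stdout | from_json).report[0]
         | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}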
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'vg_name': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'}) 2025-04-01 19:34:53.913840 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'vg_name': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'}) 2025-04-01 19:34:53.913873 | orchestrator | 2025-04-01 19:34:54.323517 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-01 19:34:54.323626 | orchestrator | Tuesday 01 April 2025 19:34:53 +0000 (0:00:00.192) 0:00:26.381 ********* 2025-04-01 19:34:54.323660 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:54.323731 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:54.324188 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:54.325217 | orchestrator | 2025-04-01 19:34:54.325441 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-01 19:34:54.325884 | orchestrator | Tuesday 01 April 2025 19:34:54 +0000 (0:00:00.418) 0:00:26.799 ********* 2025-04-01 19:34:54.524461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:54.525233 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:54.526091 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:54.526128 | orchestrator | 2025-04-01 19:34:54.526999 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-01 19:34:54.527454 | orchestrator | Tuesday 01 April 2025 19:34:54 +0000 (0:00:00.199) 0:00:26.998 ********* 2025-04-01 19:34:54.720769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'})  2025-04-01 19:34:54.721614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'})  2025-04-01 19:34:54.722678 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:34:54.723509 | orchestrator | 2025-04-01 19:34:54.724517 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-01 19:34:54.725388 | orchestrator | Tuesday 01 April 2025 19:34:54 +0000 (0:00:00.198) 0:00:27.197 ********* 2025-04-01 19:34:55.491904 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:34:55.492104 | orchestrator |  "lvm_report": { 2025-04-01 19:34:55.492568 | orchestrator |  "lv": [ 2025-04-01 19:34:55.493056 | orchestrator |  { 2025-04-01 19:34:55.494530 | orchestrator |  "lv_name": "osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9", 2025-04-01 19:34:55.495124 | orchestrator |  "vg_name": "ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9" 2025-04-01 19:34:55.495624 | orchestrator |  }, 2025-04-01 19:34:55.496054 | orchestrator |  { 2025-04-01 19:34:55.496946 | orchestrator |  "lv_name": "osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834", 2025-04-01 
19:34:55.497531 | orchestrator |  "vg_name": "ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834" 2025-04-01 19:34:55.498448 | orchestrator |  } 2025-04-01 19:34:55.499037 | orchestrator |  ], 2025-04-01 19:34:55.499492 | orchestrator |  "pv": [ 2025-04-01 19:34:55.499517 | orchestrator |  { 2025-04-01 19:34:55.499578 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-01 19:34:55.499596 | orchestrator |  "vg_name": "ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834" 2025-04-01 19:34:55.499630 | orchestrator |  }, 2025-04-01 19:34:55.500075 | orchestrator |  { 2025-04-01 19:34:55.500130 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-01 19:34:55.500398 | orchestrator |  "vg_name": "ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9" 2025-04-01 19:34:55.500736 | orchestrator |  } 2025-04-01 19:34:55.500893 | orchestrator |  ] 2025-04-01 19:34:55.501078 | orchestrator |  } 2025-04-01 19:34:55.501943 | orchestrator | } 2025-04-01 19:34:55.502264 | orchestrator | 2025-04-01 19:34:55.502280 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-01 19:34:55.502292 | orchestrator | 2025-04-01 19:34:55.502561 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:34:55.502599 | orchestrator | Tuesday 01 April 2025 19:34:55 +0000 (0:00:00.769) 0:00:27.966 ********* 2025-04-01 19:34:56.179975 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-01 19:34:56.180231 | orchestrator | 2025-04-01 19:34:56.181430 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:34:56.182138 | orchestrator | Tuesday 01 April 2025 19:34:56 +0000 (0:00:00.689) 0:00:28.656 ********* 2025-04-01 19:34:56.464906 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:34:56.468547 | orchestrator | 2025-04-01 19:34:56.471371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:56.991676 | orchestrator | Tuesday 01 April 2025 19:34:56 +0000 (0:00:00.285) 0:00:28.941 ********* 2025-04-01 19:34:56.991772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-01 19:34:56.993456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-01 19:34:56.994781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-01 19:34:56.995832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-01 19:34:56.997212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-01 19:34:56.998054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-01 19:34:56.999542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-01 19:34:57.000255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-01 19:34:57.000788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-01 19:34:57.003447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-01 19:34:57.003793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-01 19:34:57.004670 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-01 19:34:57.005256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-01 19:34:57.005729 | orchestrator | 2025-04-01 19:34:57.006163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:57.006733 | orchestrator | Tuesday 01 April 2025 19:34:56 +0000 (0:00:00.525) 0:00:29.466 ********* 2025-04-01 19:34:57.212205 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:57.212342 | orchestrator | 2025-04-01 19:34:57.213054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:57.214775 | orchestrator | Tuesday 01 April 2025 19:34:57 +0000 (0:00:00.222) 0:00:29.688 ********* 2025-04-01 19:34:57.428552 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:57.430561 | orchestrator | 2025-04-01 19:34:57.431854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:57.431955 | orchestrator | Tuesday 01 April 2025 19:34:57 +0000 (0:00:00.216) 0:00:29.905 ********* 2025-04-01 19:34:57.665412 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:57.666156 | orchestrator | 2025-04-01 19:34:57.669962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:57.670110 | orchestrator | Tuesday 01 April 2025 19:34:57 +0000 (0:00:00.234) 0:00:30.140 ********* 2025-04-01 19:34:57.877499 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:57.878861 | orchestrator | 2025-04-01 19:34:57.878890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:57.880924 | orchestrator | Tuesday 01 April 2025 19:34:57 +0000 (0:00:00.212) 0:00:30.353 ********* 2025-04-01 19:34:58.109465 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:58.109557 | orchestrator | 2025-04-01 19:34:58.109575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:58.111056 | orchestrator | Tuesday 01 April 2025 19:34:58 +0000 (0:00:00.231) 0:00:30.584 ********* 2025-04-01 19:34:58.326577 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:58.327261 | orchestrator | 2025-04-01 19:34:58.329922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:58.333997 | orchestrator | Tuesday 01 April 2025 19:34:58 +0000 (0:00:00.219) 0:00:30.803 ********* 2025-04-01 19:34:59.025751 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:59.025906 | orchestrator | 2025-04-01 19:34:59.026721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:59.027254 | orchestrator | Tuesday 01 April 2025 19:34:59 +0000 (0:00:00.697) 0:00:31.501 ********* 2025-04-01 19:34:59.273397 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:34:59.273920 | orchestrator | 2025-04-01 19:34:59.273958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:59.274242 | orchestrator | Tuesday 01 April 2025 19:34:59 +0000 (0:00:00.248) 0:00:31.750 ********* 2025-04-01 19:34:59.811584 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891) 2025-04-01 19:34:59.812099 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891) 2025-04-01 19:34:59.812163 | orchestrator | 2025-04-01 19:34:59.812561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:34:59.813134 | orchestrator | Tuesday 01 April 2025 19:34:59 +0000 (0:00:00.536) 0:00:32.286 ********* 2025-04-01 19:35:00.332066 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c) 2025-04-01 19:35:00.332488 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c) 2025-04-01 19:35:00.333291 | orchestrator | 2025-04-01 19:35:00.333362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:00.333678 | orchestrator | Tuesday 01 April 2025 19:35:00 +0000 (0:00:00.521) 0:00:32.808 ********* 2025-04-01 19:35:00.808753 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c) 2025-04-01 19:35:00.808909 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c) 2025-04-01 19:35:00.809121 | orchestrator | 2025-04-01 19:35:00.809843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:00.810193 | orchestrator | Tuesday 01 April 2025 19:35:00 +0000 (0:00:00.476) 0:00:33.284 ********* 2025-04-01 19:35:01.312702 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905) 2025-04-01 19:35:01.313447 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905) 2025-04-01 19:35:01.313493 | orchestrator | 2025-04-01 19:35:01.313561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:01.313975 | orchestrator | Tuesday 01 April 2025 19:35:01 +0000 (0:00:00.504) 0:00:33.789 ********* 2025-04-01 19:35:01.703256 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:35:01.703711 | orchestrator | 2025-04-01 19:35:01.704639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:01.705253 | orchestrator | Tuesday 01 April 2025 19:35:01 +0000 (0:00:00.390) 0:00:34.179 ********* 2025-04-01 19:35:02.207642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-01 19:35:02.208096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-01 19:35:02.208815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-01 19:35:02.209460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-01 19:35:02.211907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-01 19:35:02.212157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-01 19:35:02.212191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-01 19:35:02.212206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-01 19:35:02.212225 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-01 19:35:02.212649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-01 19:35:02.213468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-01 19:35:02.213961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-01 19:35:02.214381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-01 19:35:02.214974 | orchestrator | 2025-04-01 19:35:02.215169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:02.215552 | orchestrator | Tuesday 01 April 2025 19:35:02 +0000 (0:00:00.503) 0:00:34.682 ********* 2025-04-01 19:35:02.455132 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:02.455504 | orchestrator | 2025-04-01 19:35:02.456197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:02.456758 | orchestrator | Tuesday 01 April 2025 19:35:02 +0000 (0:00:00.249) 0:00:34.932 ********* 2025-04-01 19:35:03.143143 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:03.143276 | orchestrator | 2025-04-01 19:35:03.143301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:03.352243 | orchestrator | Tuesday 01 April 2025 19:35:03 +0000 (0:00:00.687) 0:00:35.619 ********* 2025-04-01 19:35:03.352287 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:03.352670 | orchestrator | 2025-04-01 19:35:03.352784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:03.353187 | orchestrator | Tuesday 01 April 2025 19:35:03 +0000 (0:00:00.208) 0:00:35.828 ********* 2025-04-01 19:35:03.598305 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:03.598494 | orchestrator | 2025-04-01 19:35:03.599633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:03.600992 | orchestrator | Tuesday 01 April 2025 19:35:03 +0000 (0:00:00.245) 0:00:36.073 ********* 2025-04-01 19:35:03.833242 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:03.834402 | orchestrator | 2025-04-01 19:35:03.835128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:03.835531 | orchestrator | Tuesday 01 April 2025 19:35:03 +0000 (0:00:00.234) 0:00:36.308 ********* 2025-04-01 19:35:04.046906 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:04.047036 | orchestrator | 2025-04-01 19:35:04.048055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:04.048758 | orchestrator | Tuesday 01 April 2025 19:35:04 +0000 (0:00:00.215) 0:00:36.523 ********* 2025-04-01 19:35:04.258534 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:04.259914 | orchestrator | 2025-04-01 19:35:04.260520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:04.260910 | orchestrator | Tuesday 01 April 2025 19:35:04 +0000 (0:00:00.211) 0:00:36.735 ********* 2025-04-01 19:35:04.473504 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:04.474100 | orchestrator | 2025-04-01 19:35:04.474837 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-01 19:35:04.475802 | orchestrator | Tuesday 01 April 2025 19:35:04 +0000 (0:00:00.214) 0:00:36.949 ********* 2025-04-01 19:35:05.450244 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-01 19:35:05.451759 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-01 19:35:05.453082 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-01 19:35:05.454819 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-01 19:35:05.456082 | orchestrator | 2025-04-01 19:35:05.456804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:05.456836 | orchestrator | Tuesday 01 April 2025 19:35:05 +0000 (0:00:00.974) 0:00:37.924 ********* 2025-04-01 19:35:05.694803 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:05.695543 | orchestrator | 2025-04-01 19:35:05.697134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:05.699858 | orchestrator | Tuesday 01 April 2025 19:35:05 +0000 (0:00:00.245) 0:00:38.169 ********* 2025-04-01 19:35:05.913552 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:05.916409 | orchestrator | 2025-04-01 19:35:05.917494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:05.917980 | orchestrator | Tuesday 01 April 2025 19:35:05 +0000 (0:00:00.221) 0:00:38.391 ********* 2025-04-01 19:35:06.635850 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:06.636569 | orchestrator | 2025-04-01 19:35:06.638011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:06.638912 | orchestrator | Tuesday 01 April 2025 19:35:06 +0000 (0:00:00.721) 0:00:39.112 ********* 2025-04-01 19:35:06.859025 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:06.859475 | orchestrator | 2025-04-01 19:35:06.859536 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-01 19:35:06.860545 | orchestrator | Tuesday 01 April 2025 19:35:06 +0000 (0:00:00.221) 0:00:39.333 ********* 2025-04-01 19:35:07.003873 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:07.004026 | orchestrator | 2025-04-01 19:35:07.005038 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-01 19:35:07.005807 | orchestrator | Tuesday 01 April 2025 19:35:06 +0000 (0:00:00.147) 0:00:39.481 ********* 2025-04-01 19:35:07.219443 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52229b2b-1fb5-50ba-ad18-deadbd92af76'}}) 2025-04-01 19:35:07.220751 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b9675d24-a7d4-5c32-a36a-48aa524d4563'}}) 2025-04-01 19:35:07.221814 | orchestrator | 2025-04-01 19:35:07.222284 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-01 19:35:07.222837 | orchestrator | Tuesday 01 April 2025 19:35:07 +0000 (0:00:00.216) 0:00:39.697 ********* 2025-04-01 19:35:09.231804 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'}) 2025-04-01 19:35:09.477480 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 
'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'}) 2025-04-01 19:35:10.656952 | orchestrator | 2025-04-01 19:35:10.657066 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-01 19:35:10.657086 | orchestrator | Tuesday 01 April 2025 19:35:09 +0000 (0:00:02.008) 0:00:41.706 ********* 2025-04-01 19:35:10.657103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:10.657119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:10.657133 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:10.657148 | orchestrator | 2025-04-01 19:35:10.657162 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-01 19:35:10.657176 | orchestrator | Tuesday 01 April 2025 19:35:09 +0000 (0:00:00.210) 0:00:41.916 ********* 2025-04-01 19:35:10.657207 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'}) 2025-04-01 19:35:10.657779 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'}) 2025-04-01 19:35:10.658407 | orchestrator | 2025-04-01 19:35:10.661145 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-01 19:35:10.841062 | orchestrator | Tuesday 01 April 2025 19:35:10 +0000 (0:00:01.215) 0:00:43.131 ********* 2025-04-01 19:35:10.841102 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:10.842002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:10.844039 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:10.845364 | orchestrator | 2025-04-01 19:35:10.846728 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-01 19:35:10.847471 | orchestrator | Tuesday 01 April 2025 19:35:10 +0000 (0:00:00.182) 0:00:43.314 ********* 2025-04-01 19:35:10.999571 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:11.000701 | orchestrator | 2025-04-01 19:35:11.001497 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-01 19:35:11.002056 | orchestrator | Tuesday 01 April 2025 19:35:10 +0000 (0:00:00.161) 0:00:43.476 ********* 2025-04-01 19:35:11.421842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:11.423433 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:11.423546 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:11.424875 | orchestrator | 2025-04-01 19:35:11.425207 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-01 19:35:11.426146 | orchestrator | Tuesday 
01 April 2025 19:35:11 +0000 (0:00:00.420) 0:00:43.896 ********* 2025-04-01 19:35:11.587652 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:11.588468 | orchestrator | 2025-04-01 19:35:11.588496 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-01 19:35:11.588551 | orchestrator | Tuesday 01 April 2025 19:35:11 +0000 (0:00:00.168) 0:00:44.065 ********* 2025-04-01 19:35:11.794660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:11.794805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:11.795058 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:11.795468 | orchestrator | 2025-04-01 19:35:11.795751 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-01 19:35:11.796246 | orchestrator | Tuesday 01 April 2025 19:35:11 +0000 (0:00:00.206) 0:00:44.271 ********* 2025-04-01 19:35:11.975088 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:11.975663 | orchestrator | 2025-04-01 19:35:11.975694 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-01 19:35:11.976199 | orchestrator | Tuesday 01 April 2025 19:35:11 +0000 (0:00:00.181) 0:00:44.452 ********* 2025-04-01 19:35:12.180941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:12.181889 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:12.182992 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:12.184171 | orchestrator | 2025-04-01 19:35:12.185485 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-01 19:35:12.187632 | orchestrator | Tuesday 01 April 2025 19:35:12 +0000 (0:00:00.204) 0:00:44.657 ********* 2025-04-01 19:35:12.331698 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:12.332155 | orchestrator | 2025-04-01 19:35:12.333123 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-01 19:35:12.335912 | orchestrator | Tuesday 01 April 2025 19:35:12 +0000 (0:00:00.150) 0:00:44.807 ********* 2025-04-01 19:35:12.540766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:12.540936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:12.541841 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:12.543176 | orchestrator | 2025-04-01 19:35:12.544806 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-01 19:35:12.545661 | orchestrator | Tuesday 01 April 2025 19:35:12 +0000 (0:00:00.209) 0:00:45.017 ********* 2025-04-01 19:35:12.733775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 
'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:12.733923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:12.733976 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:12.733998 | orchestrator | 2025-04-01 19:35:12.734152 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-01 19:35:12.916501 | orchestrator | Tuesday 01 April 2025 19:35:12 +0000 (0:00:00.191) 0:00:45.208 ********* 2025-04-01 19:35:12.916609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:12.918720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:12.918939 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:12.918964 | orchestrator | 2025-04-01 19:35:12.918983 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-01 19:35:12.919424 | orchestrator | Tuesday 01 April 2025 19:35:12 +0000 (0:00:00.184) 0:00:45.393 ********* 2025-04-01 19:35:13.075964 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:13.076793 | orchestrator | 2025-04-01 19:35:13.077921 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-01 19:35:13.080724 | orchestrator | Tuesday 01 April 2025 19:35:13 +0000 (0:00:00.158) 0:00:45.552 ********* 2025-04-01 19:35:13.234343 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:13.235830 | orchestrator | 2025-04-01 19:35:13.236672 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-01 19:35:13.237301 | orchestrator | Tuesday 01 April 2025 19:35:13 +0000 (0:00:00.158) 0:00:45.710 ********* 2025-04-01 19:35:13.618653 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:13.619198 | orchestrator | 2025-04-01 19:35:13.619782 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-01 19:35:13.620570 | orchestrator | Tuesday 01 April 2025 19:35:13 +0000 (0:00:00.383) 0:00:46.093 ********* 2025-04-01 19:35:13.782287 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:35:13.782504 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-01 19:35:13.782533 | orchestrator | } 2025-04-01 19:35:13.784264 | orchestrator | 2025-04-01 19:35:13.784654 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-01 19:35:13.784935 | orchestrator | Tuesday 01 April 2025 19:35:13 +0000 (0:00:00.166) 0:00:46.260 ********* 2025-04-01 19:35:13.967639 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:35:13.969976 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-01 19:35:13.970409 | orchestrator | } 2025-04-01 19:35:13.971058 | orchestrator | 2025-04-01 19:35:13.971689 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-01 19:35:13.974126 | orchestrator | Tuesday 01 April 2025 19:35:13 +0000 (0:00:00.183) 0:00:46.443 ********* 2025-04-01 19:35:14.126572 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:35:14.127429 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-01 
19:35:14.127921 | orchestrator | } 2025-04-01 19:35:14.127950 | orchestrator | 2025-04-01 19:35:14.129054 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-01 19:35:14.129771 | orchestrator | Tuesday 01 April 2025 19:35:14 +0000 (0:00:00.160) 0:00:46.604 ********* 2025-04-01 19:35:14.713145 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:14.715508 | orchestrator | 2025-04-01 19:35:14.716074 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-01 19:35:14.716102 | orchestrator | Tuesday 01 April 2025 19:35:14 +0000 (0:00:00.582) 0:00:47.187 ********* 2025-04-01 19:35:15.281991 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:15.282693 | orchestrator | 2025-04-01 19:35:15.283309 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-01 19:35:15.284472 | orchestrator | Tuesday 01 April 2025 19:35:15 +0000 (0:00:00.570) 0:00:47.758 ********* 2025-04-01 19:35:15.818930 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:15.819556 | orchestrator | 2025-04-01 19:35:15.820368 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-01 19:35:15.821000 | orchestrator | Tuesday 01 April 2025 19:35:15 +0000 (0:00:00.536) 0:00:48.294 ********* 2025-04-01 19:35:15.966186 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:15.966810 | orchestrator | 2025-04-01 19:35:15.969243 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-01 19:35:15.969762 | orchestrator | Tuesday 01 April 2025 19:35:15 +0000 (0:00:00.148) 0:00:48.443 ********* 2025-04-01 19:35:16.091940 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:16.093389 | orchestrator | 2025-04-01 19:35:16.094927 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-01 19:35:16.096011 | orchestrator | Tuesday 01 April 2025 19:35:16 +0000 (0:00:00.124) 0:00:48.568 ********* 2025-04-01 19:35:16.206495 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:16.207840 | orchestrator | 2025-04-01 19:35:16.208779 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-01 19:35:16.210441 | orchestrator | Tuesday 01 April 2025 19:35:16 +0000 (0:00:00.114) 0:00:48.682 ********* 2025-04-01 19:35:16.345510 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:35:16.345776 | orchestrator |  "vgs_report": { 2025-04-01 19:35:16.346171 | orchestrator |  "vg": [] 2025-04-01 19:35:16.347496 | orchestrator |  } 2025-04-01 19:35:16.348427 | orchestrator | } 2025-04-01 19:35:16.351054 | orchestrator | 2025-04-01 19:35:16.729609 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-01 19:35:16.729732 | orchestrator | Tuesday 01 April 2025 19:35:16 +0000 (0:00:00.140) 0:00:48.822 ********* 2025-04-01 19:35:16.729768 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:16.730910 | orchestrator | 2025-04-01 19:35:16.731276 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-01 19:35:16.732451 | orchestrator | Tuesday 01 April 2025 19:35:16 +0000 (0:00:00.383) 0:00:49.206 ********* 2025-04-01 19:35:16.902120 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:16.903714 | orchestrator | 2025-04-01 19:35:16.904302 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-04-01 19:35:16.906091 | orchestrator | Tuesday 01 April 2025 19:35:16 +0000 (0:00:00.171) 0:00:49.378 ********* 2025-04-01 19:35:17.049791 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.050237 | orchestrator | 2025-04-01 19:35:17.050577 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-01 19:35:17.051201 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.148) 0:00:49.527 ********* 2025-04-01 19:35:17.213060 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.213538 | orchestrator | 2025-04-01 19:35:17.214137 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-01 19:35:17.214908 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.162) 0:00:49.690 ********* 2025-04-01 19:35:17.390924 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.391929 | orchestrator | 2025-04-01 19:35:17.392648 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-01 19:35:17.394261 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.174) 0:00:49.864 ********* 2025-04-01 19:35:17.546168 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.546512 | orchestrator | 2025-04-01 19:35:17.547834 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-01 19:35:17.549608 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.157) 0:00:50.022 ********* 2025-04-01 19:35:17.725815 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.726272 | orchestrator | 2025-04-01 19:35:17.727131 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-01 19:35:17.727828 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.180) 0:00:50.203 ********* 2025-04-01 19:35:17.898906 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:17.899154 | orchestrator | 2025-04-01 19:35:17.900354 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-01 19:35:17.900985 | orchestrator | Tuesday 01 April 2025 19:35:17 +0000 (0:00:00.171) 0:00:50.375 ********* 2025-04-01 19:35:18.077435 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:18.077583 | orchestrator | 2025-04-01 19:35:18.078115 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-01 19:35:18.078694 | orchestrator | Tuesday 01 April 2025 19:35:18 +0000 (0:00:00.179) 0:00:50.554 ********* 2025-04-01 19:35:18.234553 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:18.235345 | orchestrator | 2025-04-01 19:35:18.236700 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-01 19:35:18.237612 | orchestrator | Tuesday 01 April 2025 19:35:18 +0000 (0:00:00.156) 0:00:50.711 ********* 2025-04-01 19:35:18.384074 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:18.386271 | orchestrator | 2025-04-01 19:35:18.388968 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-01 19:35:18.389751 | orchestrator | Tuesday 01 April 2025 19:35:18 +0000 (0:00:00.147) 0:00:50.858 ********* 2025-04-01 19:35:18.553061 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:18.554553 | orchestrator | 2025-04-01 19:35:18.555042 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-04-01 19:35:18.556157 | orchestrator | Tuesday 01 April 2025 19:35:18 +0000 (0:00:00.170) 0:00:51.029 ********* 2025-04-01 19:35:18.929642 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:18.931041 | orchestrator | 2025-04-01 19:35:18.931487 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-01 19:35:18.932472 | orchestrator | Tuesday 01 April 2025 19:35:18 +0000 (0:00:00.376) 0:00:51.406 ********* 2025-04-01 19:35:19.084473 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.084573 | orchestrator | 2025-04-01 19:35:19.084841 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-01 19:35:19.085286 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.154) 0:00:51.561 ********* 2025-04-01 19:35:19.275228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:19.276008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:19.277458 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.278171 | orchestrator | 2025-04-01 19:35:19.279034 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-01 19:35:19.280356 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.189) 0:00:51.751 ********* 2025-04-01 19:35:19.462809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:19.463586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:19.464294 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.465440 | orchestrator | 2025-04-01 19:35:19.466267 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-01 19:35:19.466943 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.185) 0:00:51.937 ********* 2025-04-01 19:35:19.632752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:19.633296 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:19.633665 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.634282 | orchestrator | 2025-04-01 19:35:19.634971 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-01 19:35:19.635732 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.172) 0:00:52.109 ********* 2025-04-01 19:35:19.807249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:19.808537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 
19:35:19.809942 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.810536 | orchestrator | 2025-04-01 19:35:19.810942 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-01 19:35:19.811867 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.173) 0:00:52.283 ********* 2025-04-01 19:35:19.993827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:19.994823 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:19.995501 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:19.996539 | orchestrator | 2025-04-01 19:35:19.996985 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-01 19:35:19.997823 | orchestrator | Tuesday 01 April 2025 19:35:19 +0000 (0:00:00.187) 0:00:52.470 ********* 2025-04-01 19:35:20.229665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:20.230747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:20.230824 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:20.230918 | orchestrator | 2025-04-01 19:35:20.231718 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-01 19:35:20.232386 | orchestrator | Tuesday 01 April 2025 19:35:20 +0000 (0:00:00.233) 0:00:52.704 ********* 2025-04-01 19:35:20.406737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:20.407512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:20.408064 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:20.409137 | orchestrator | 2025-04-01 19:35:20.409760 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-01 19:35:20.410209 | orchestrator | Tuesday 01 April 2025 19:35:20 +0000 (0:00:00.179) 0:00:52.883 ********* 2025-04-01 19:35:20.601744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:20.602076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:20.603236 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:20.605870 | orchestrator | 2025-04-01 19:35:20.607125 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-01 19:35:20.608502 | orchestrator | Tuesday 01 April 2025 19:35:20 +0000 (0:00:00.193) 0:00:53.077 ********* 2025-04-01 19:35:21.168413 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:21.168578 | orchestrator | 2025-04-01 19:35:21.169372 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-04-01 19:35:21.170413 | orchestrator | Tuesday 01 April 2025 19:35:21 +0000 (0:00:00.566) 0:00:53.644 ********* 2025-04-01 19:35:21.977973 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:21.984932 | orchestrator | 2025-04-01 19:35:21.990417 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-01 19:35:21.991006 | orchestrator | Tuesday 01 April 2025 19:35:21 +0000 (0:00:00.801) 0:00:54.445 ********* 2025-04-01 19:35:22.131001 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:22.131407 | orchestrator | 2025-04-01 19:35:22.131785 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-01 19:35:22.132602 | orchestrator | Tuesday 01 April 2025 19:35:22 +0000 (0:00:00.162) 0:00:54.608 ********* 2025-04-01 19:35:22.315038 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'vg_name': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'}) 2025-04-01 19:35:22.315857 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'vg_name': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'}) 2025-04-01 19:35:22.316631 | orchestrator | 2025-04-01 19:35:22.317833 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-01 19:35:22.318950 | orchestrator | Tuesday 01 April 2025 19:35:22 +0000 (0:00:00.183) 0:00:54.791 ********* 2025-04-01 19:35:22.526158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:22.526579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:22.527977 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:22.529450 | orchestrator | 2025-04-01 19:35:22.531156 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-01 19:35:22.531976 | orchestrator | Tuesday 01 April 2025 19:35:22 +0000 (0:00:00.207) 0:00:54.999 ********* 2025-04-01 19:35:22.715127 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:22.715925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:22.718093 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:22.718885 | orchestrator | 2025-04-01 19:35:22.719970 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-01 19:35:22.720407 | orchestrator | Tuesday 01 April 2025 19:35:22 +0000 (0:00:00.191) 0:00:55.191 ********* 2025-04-01 19:35:22.908488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'})  2025-04-01 19:35:22.908626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'})  2025-04-01 19:35:22.909160 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:35:22.909745 | orchestrator | 2025-04-01 
19:35:22.910299 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-01 19:35:22.911332 | orchestrator | Tuesday 01 April 2025 19:35:22 +0000 (0:00:00.194) 0:00:55.385 ********* 2025-04-01 19:35:23.896606 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:35:23.896751 | orchestrator |  "lvm_report": { 2025-04-01 19:35:23.897950 | orchestrator |  "lv": [ 2025-04-01 19:35:23.899816 | orchestrator |  { 2025-04-01 19:35:23.900000 | orchestrator |  "lv_name": "osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76", 2025-04-01 19:35:23.900933 | orchestrator |  "vg_name": "ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76" 2025-04-01 19:35:23.901558 | orchestrator |  }, 2025-04-01 19:35:23.902330 | orchestrator |  { 2025-04-01 19:35:23.902751 | orchestrator |  "lv_name": "osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563", 2025-04-01 19:35:23.903562 | orchestrator |  "vg_name": "ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563" 2025-04-01 19:35:23.904829 | orchestrator |  } 2025-04-01 19:35:23.905613 | orchestrator |  ], 2025-04-01 19:35:23.906261 | orchestrator |  "pv": [ 2025-04-01 19:35:23.906990 | orchestrator |  { 2025-04-01 19:35:23.907296 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-01 19:35:23.908361 | orchestrator |  "vg_name": "ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76" 2025-04-01 19:35:23.908902 | orchestrator |  }, 2025-04-01 19:35:23.909370 | orchestrator |  { 2025-04-01 19:35:23.909976 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-01 19:35:23.910648 | orchestrator |  "vg_name": "ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563" 2025-04-01 19:35:23.911185 | orchestrator |  } 2025-04-01 19:35:23.911859 | orchestrator |  ] 2025-04-01 19:35:23.912131 | orchestrator |  } 2025-04-01 19:35:23.912887 | orchestrator | } 2025-04-01 19:35:23.913250 | orchestrator | 2025-04-01 19:35:23.913981 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-01 19:35:23.914632 | orchestrator | 2025-04-01 19:35:23.915403 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-01 19:35:23.916387 | orchestrator | Tuesday 01 April 2025 19:35:23 +0000 (0:00:00.986) 0:00:56.371 ********* 2025-04-01 19:35:24.146561 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-01 19:35:24.147228 | orchestrator | 2025-04-01 19:35:24.147804 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-01 19:35:24.151002 | orchestrator | Tuesday 01 April 2025 19:35:24 +0000 (0:00:00.250) 0:00:56.622 ********* 2025-04-01 19:35:24.478306 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:24.478787 | orchestrator | 2025-04-01 19:35:24.479388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:24.482161 | orchestrator | Tuesday 01 April 2025 19:35:24 +0000 (0:00:00.332) 0:00:56.955 ********* 2025-04-01 19:35:25.000141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-01 19:35:25.000505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-01 19:35:25.002704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-01 19:35:25.003718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-01 19:35:25.004601 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-01 19:35:25.005551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-01 19:35:25.006385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-01 19:35:25.006862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-01 19:35:25.007622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-01 19:35:25.008274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-01 19:35:25.009195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-01 19:35:25.010582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-01 19:35:25.010617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-01 19:35:25.011594 | orchestrator | 2025-04-01 19:35:25.012420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:25.012641 | orchestrator | Tuesday 01 April 2025 19:35:24 +0000 (0:00:00.518) 0:00:57.474 ********* 2025-04-01 19:35:25.201636 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:25.202648 | orchestrator | 2025-04-01 19:35:25.203102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:25.203825 | orchestrator | Tuesday 01 April 2025 19:35:25 +0000 (0:00:00.204) 0:00:57.678 ********* 2025-04-01 19:35:25.483115 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:25.484410 | orchestrator | 2025-04-01 19:35:25.487251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:25.684564 | orchestrator | Tuesday 01 April 2025 19:35:25 +0000 (0:00:00.280) 0:00:57.958 ********* 2025-04-01 19:35:25.684639 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:25.688053 | orchestrator | 2025-04-01 19:35:25.688599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:25.688630 | orchestrator | Tuesday 01 April 2025 19:35:25 +0000 (0:00:00.199) 0:00:58.158 ********* 2025-04-01 19:35:26.286509 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:26.286722 | orchestrator | 2025-04-01 19:35:26.287183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:26.288075 | orchestrator | Tuesday 01 April 2025 19:35:26 +0000 (0:00:00.604) 0:00:58.763 ********* 2025-04-01 19:35:26.510650 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:26.511225 | orchestrator | 2025-04-01 19:35:26.512420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:26.514986 | orchestrator | Tuesday 01 April 2025 19:35:26 +0000 (0:00:00.222) 0:00:58.986 ********* 2025-04-01 19:35:26.725066 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:26.725481 | orchestrator | 2025-04-01 19:35:26.726179 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:26.726966 | orchestrator | Tuesday 01 April 2025 19:35:26 +0000 (0:00:00.213) 0:00:59.200 ********* 2025-04-01 19:35:26.948343 | orchestrator | skipping: 
[testbed-node-5] 2025-04-01 19:35:26.949096 | orchestrator | 2025-04-01 19:35:26.950290 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:26.950866 | orchestrator | Tuesday 01 April 2025 19:35:26 +0000 (0:00:00.224) 0:00:59.424 ********* 2025-04-01 19:35:27.199505 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:27.200014 | orchestrator | 2025-04-01 19:35:27.200300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:27.200359 | orchestrator | Tuesday 01 April 2025 19:35:27 +0000 (0:00:00.248) 0:00:59.674 ********* 2025-04-01 19:35:27.641937 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c) 2025-04-01 19:35:27.642958 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c) 2025-04-01 19:35:27.644086 | orchestrator | 2025-04-01 19:35:27.644859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:27.647924 | orchestrator | Tuesday 01 April 2025 19:35:27 +0000 (0:00:00.442) 0:01:00.117 ********* 2025-04-01 19:35:28.311456 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7) 2025-04-01 19:35:28.313040 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7) 2025-04-01 19:35:28.313073 | orchestrator | 2025-04-01 19:35:28.313601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:28.315908 | orchestrator | Tuesday 01 April 2025 19:35:28 +0000 (0:00:00.670) 0:01:00.788 ********* 2025-04-01 19:35:28.861950 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c) 2025-04-01 19:35:28.863985 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c) 2025-04-01 19:35:28.866142 | orchestrator | 2025-04-01 19:35:28.866180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:29.697956 | orchestrator | Tuesday 01 April 2025 19:35:28 +0000 (0:00:00.548) 0:01:01.336 ********* 2025-04-01 19:35:29.698138 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8) 2025-04-01 19:35:29.698216 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8) 2025-04-01 19:35:29.698239 | orchestrator | 2025-04-01 19:35:29.698576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-01 19:35:29.699025 | orchestrator | Tuesday 01 April 2025 19:35:29 +0000 (0:00:00.835) 0:01:02.172 ********* 2025-04-01 19:35:30.547186 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-01 19:35:30.548110 | orchestrator | 2025-04-01 19:35:30.549411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:30.549496 | orchestrator | Tuesday 01 April 2025 19:35:30 +0000 (0:00:00.852) 0:01:03.025 ********* 2025-04-01 19:35:31.170870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-01 19:35:31.171082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
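The repeated "Add known links to the list of available block devices" and "Add known partitions to the list of available block devices" tasks above are per-device includes (_add-device-links.yml / _add-device-partitions.yml) that, judging by the items they report (scsi-0QEMU_* aliases, sda1/sda14/...), collect each device's /dev/disk/by-id aliases and partitions from the Ansible hardware facts. A minimal sketch of that pattern, assuming the include receives the device name as item and that a hypothetical ceph_available_devices list is being built up (both names are assumptions, not the playbook's actual variables):

# Sketch only: ceph_available_devices and the list layout are assumed for illustration.
- name: Add known links to the list of available block devices
  ansible.builtin.set_fact:
    ceph_available_devices: "{{ ceph_available_devices + ['/dev/disk/by-id/' ~ link] }}"
  loop: "{{ ansible_facts['devices'][item]['links']['ids'] | default([]) }}"
  loop_control:
    loop_var: link

- name: Add known partitions to the list of available block devices
  ansible.builtin.set_fact:
    ceph_available_devices: "{{ ceph_available_devices + ['/dev/' ~ partition] }}"
  loop: "{{ ansible_facts['devices'][item]['partitions'] | default({}) | list }}"
  loop_control:
    loop_var: partition
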
2025-04-01 19:35:31.171985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-01 19:35:31.173303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-01 19:35:31.174078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-01 19:35:31.175006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-01 19:35:31.175679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-01 19:35:31.176700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-01 19:35:31.180129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-01 19:35:31.181061 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-01 19:35:31.181090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-01 19:35:31.181106 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-01 19:35:31.181122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-01 19:35:31.181141 | orchestrator | 2025-04-01 19:35:31.181594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:31.182201 | orchestrator | Tuesday 01 April 2025 19:35:31 +0000 (0:00:00.621) 0:01:03.646 ********* 2025-04-01 19:35:31.394862 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:31.395347 | orchestrator | 2025-04-01 19:35:31.396758 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:31.397282 | orchestrator | Tuesday 01 April 2025 19:35:31 +0000 (0:00:00.225) 0:01:03.872 ********* 2025-04-01 19:35:31.635075 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:31.635480 | orchestrator | 2025-04-01 19:35:31.637459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:31.638784 | orchestrator | Tuesday 01 April 2025 19:35:31 +0000 (0:00:00.238) 0:01:04.111 ********* 2025-04-01 19:35:31.890925 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:31.891651 | orchestrator | 2025-04-01 19:35:31.892503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:31.895405 | orchestrator | Tuesday 01 April 2025 19:35:31 +0000 (0:00:00.254) 0:01:04.365 ********* 2025-04-01 19:35:32.108260 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:32.108604 | orchestrator | 2025-04-01 19:35:32.109981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:32.323977 | orchestrator | Tuesday 01 April 2025 19:35:32 +0000 (0:00:00.219) 0:01:04.584 ********* 2025-04-01 19:35:32.324096 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:32.324171 | orchestrator | 2025-04-01 19:35:32.324531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:32.325423 | orchestrator | Tuesday 01 April 2025 19:35:32 +0000 (0:00:00.215) 0:01:04.800 ********* 2025-04-01 19:35:32.584968 | orchestrator | 
skipping: [testbed-node-5] 2025-04-01 19:35:32.585552 | orchestrator | 2025-04-01 19:35:32.586774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:32.587383 | orchestrator | Tuesday 01 April 2025 19:35:32 +0000 (0:00:00.262) 0:01:05.062 ********* 2025-04-01 19:35:32.821836 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:32.822748 | orchestrator | 2025-04-01 19:35:32.824748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:32.825156 | orchestrator | Tuesday 01 April 2025 19:35:32 +0000 (0:00:00.233) 0:01:05.296 ********* 2025-04-01 19:35:33.042906 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:33.043236 | orchestrator | 2025-04-01 19:35:33.044331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:33.044907 | orchestrator | Tuesday 01 April 2025 19:35:33 +0000 (0:00:00.222) 0:01:05.519 ********* 2025-04-01 19:35:34.281854 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-01 19:35:34.282476 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-01 19:35:34.282850 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-01 19:35:34.283049 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-01 19:35:34.283933 | orchestrator | 2025-04-01 19:35:34.284427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:34.286158 | orchestrator | Tuesday 01 April 2025 19:35:34 +0000 (0:00:01.238) 0:01:06.757 ********* 2025-04-01 19:35:34.496945 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:34.497090 | orchestrator | 2025-04-01 19:35:34.497792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:34.500028 | orchestrator | Tuesday 01 April 2025 19:35:34 +0000 (0:00:00.216) 0:01:06.973 ********* 2025-04-01 19:35:34.711650 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:34.713482 | orchestrator | 2025-04-01 19:35:34.713701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:34.715836 | orchestrator | Tuesday 01 April 2025 19:35:34 +0000 (0:00:00.214) 0:01:07.188 ********* 2025-04-01 19:35:34.911020 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:34.911250 | orchestrator | 2025-04-01 19:35:34.912330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-01 19:35:34.912815 | orchestrator | Tuesday 01 April 2025 19:35:34 +0000 (0:00:00.199) 0:01:07.387 ********* 2025-04-01 19:35:35.142714 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:35.142875 | orchestrator | 2025-04-01 19:35:35.143928 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-01 19:35:35.144441 | orchestrator | Tuesday 01 April 2025 19:35:35 +0000 (0:00:00.230) 0:01:07.618 ********* 2025-04-01 19:35:35.289943 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:35.290774 | orchestrator | 2025-04-01 19:35:35.292468 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-01 19:35:35.294094 | orchestrator | Tuesday 01 April 2025 19:35:35 +0000 (0:00:00.148) 0:01:07.766 ********* 2025-04-01 19:35:35.520119 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'959a80fb-1de6-50df-b35c-a247ba0dd9c7'}}) 2025-04-01 19:35:35.520491 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}}) 2025-04-01 19:35:35.521275 | orchestrator | 2025-04-01 19:35:35.522144 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-01 19:35:35.522741 | orchestrator | Tuesday 01 April 2025 19:35:35 +0000 (0:00:00.230) 0:01:07.997 ********* 2025-04-01 19:35:37.294970 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'}) 2025-04-01 19:35:37.295140 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}) 2025-04-01 19:35:37.296494 | orchestrator | 2025-04-01 19:35:37.296610 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-01 19:35:37.298243 | orchestrator | Tuesday 01 April 2025 19:35:37 +0000 (0:00:01.773) 0:01:09.770 ********* 2025-04-01 19:35:37.477432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:37.478224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:37.478838 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:37.480186 | orchestrator | 2025-04-01 19:35:37.480970 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-01 19:35:37.481452 | orchestrator | Tuesday 01 April 2025 19:35:37 +0000 (0:00:00.184) 0:01:09.954 ********* 2025-04-01 19:35:38.899487 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'}) 2025-04-01 19:35:38.901981 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}) 2025-04-01 19:35:38.903609 | orchestrator | 2025-04-01 19:35:38.905062 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-01 19:35:38.905994 | orchestrator | Tuesday 01 April 2025 19:35:38 +0000 (0:00:01.420) 0:01:11.375 ********* 2025-04-01 19:35:39.093516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:39.095304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:39.097121 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.098232 | orchestrator | 2025-04-01 19:35:39.099007 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-01 19:35:39.099601 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.193) 0:01:11.569 ********* 2025-04-01 19:35:39.249772 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.250923 | orchestrator | 2025-04-01 19:35:39.252088 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-04-01 19:35:39.252862 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.156) 0:01:11.726 ********* 2025-04-01 19:35:39.443726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:39.444406 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:39.444902 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.445933 | orchestrator | 2025-04-01 19:35:39.446056 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-01 19:35:39.447471 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.193) 0:01:11.919 ********* 2025-04-01 19:35:39.589054 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.590182 | orchestrator | 2025-04-01 19:35:39.590942 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-01 19:35:39.592751 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.146) 0:01:12.065 ********* 2025-04-01 19:35:39.777121 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:39.777265 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:39.778362 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.779196 | orchestrator | 2025-04-01 19:35:39.779791 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-01 19:35:39.780365 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.188) 0:01:12.254 ********* 2025-04-01 19:35:39.980113 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:39.982236 | orchestrator | 2025-04-01 19:35:39.983106 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-01 19:35:39.984678 | orchestrator | Tuesday 01 April 2025 19:35:39 +0000 (0:00:00.201) 0:01:12.455 ********* 2025-04-01 19:35:40.183368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:40.183554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:40.183883 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:40.183914 | orchestrator | 2025-04-01 19:35:40.184528 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-01 19:35:40.335403 | orchestrator | Tuesday 01 April 2025 19:35:40 +0000 (0:00:00.202) 0:01:12.658 ********* 2025-04-01 19:35:40.335488 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:40.336582 | orchestrator | 2025-04-01 19:35:40.519989 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-01 19:35:40.520059 | orchestrator | Tuesday 01 April 2025 19:35:40 +0000 (0:00:00.152) 0:01:12.811 ********* 2025-04-01 19:35:40.520086 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:40.521043 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:40.521776 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:40.522744 | orchestrator | 2025-04-01 19:35:40.524884 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-01 19:35:40.707692 | orchestrator | Tuesday 01 April 2025 19:35:40 +0000 (0:00:00.185) 0:01:12.996 ********* 2025-04-01 19:35:40.707749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:40.708871 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:40.709425 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:40.712307 | orchestrator | 2025-04-01 19:35:41.142675 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-01 19:35:41.142768 | orchestrator | Tuesday 01 April 2025 19:35:40 +0000 (0:00:00.186) 0:01:13.183 ********* 2025-04-01 19:35:41.142802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:41.143238 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:41.143272 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:41.144340 | orchestrator | 2025-04-01 19:35:41.145258 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-01 19:35:41.148145 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.434) 0:01:13.618 ********* 2025-04-01 19:35:41.290950 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:41.291308 | orchestrator | 2025-04-01 19:35:41.292344 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-01 19:35:41.292966 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.149) 0:01:13.768 ********* 2025-04-01 19:35:41.440123 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:41.441288 | orchestrator | 2025-04-01 19:35:41.441365 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-01 19:35:41.442101 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.145) 0:01:13.913 ********* 2025-04-01 19:35:41.596451 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:41.596728 | orchestrator | 2025-04-01 19:35:41.597600 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-01 19:35:41.598151 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.160) 0:01:14.073 ********* 2025-04-01 19:35:41.752824 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:35:41.754177 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-01 19:35:41.758107 | orchestrator | } 2025-04-01 19:35:41.759290 | orchestrator | 2025-04-01 19:35:41.760971 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-01 19:35:41.761680 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.155) 0:01:14.228 ********* 2025-04-01 19:35:41.912889 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:35:41.913712 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-01 19:35:41.913992 | orchestrator | } 2025-04-01 19:35:41.914808 | orchestrator | 2025-04-01 19:35:41.915918 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-01 19:35:41.920431 | orchestrator | Tuesday 01 April 2025 19:35:41 +0000 (0:00:00.161) 0:01:14.390 ********* 2025-04-01 19:35:42.115629 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:35:42.116366 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-01 19:35:42.117558 | orchestrator | } 2025-04-01 19:35:42.117799 | orchestrator | 2025-04-01 19:35:42.119167 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-01 19:35:42.119560 | orchestrator | Tuesday 01 April 2025 19:35:42 +0000 (0:00:00.201) 0:01:14.591 ********* 2025-04-01 19:35:42.641135 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:42.642082 | orchestrator | 2025-04-01 19:35:42.642107 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-01 19:35:42.642351 | orchestrator | Tuesday 01 April 2025 19:35:42 +0000 (0:00:00.523) 0:01:15.115 ********* 2025-04-01 19:35:43.145800 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:43.145957 | orchestrator | 2025-04-01 19:35:43.146679 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-01 19:35:43.147194 | orchestrator | Tuesday 01 April 2025 19:35:43 +0000 (0:00:00.507) 0:01:15.622 ********* 2025-04-01 19:35:43.666538 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:43.666687 | orchestrator | 2025-04-01 19:35:43.667278 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-01 19:35:43.667420 | orchestrator | Tuesday 01 April 2025 19:35:43 +0000 (0:00:00.517) 0:01:16.140 ********* 2025-04-01 19:35:43.850778 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:43.850954 | orchestrator | 2025-04-01 19:35:43.851366 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-01 19:35:43.851969 | orchestrator | Tuesday 01 April 2025 19:35:43 +0000 (0:00:00.186) 0:01:16.327 ********* 2025-04-01 19:35:44.199478 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:44.199654 | orchestrator | 2025-04-01 19:35:44.200162 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-01 19:35:44.201028 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.349) 0:01:16.676 ********* 2025-04-01 19:35:44.328432 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:44.330163 | orchestrator | 2025-04-01 19:35:44.331015 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-01 19:35:44.333716 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.125) 0:01:16.802 ********* 2025-04-01 19:35:44.485483 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:35:44.487948 | orchestrator |  "vgs_report": { 2025-04-01 19:35:44.489489 | orchestrator |  "vg": [] 2025-04-01 19:35:44.491061 | orchestrator |  } 2025-04-01 19:35:44.492245 | orchestrator 
| } 2025-04-01 19:35:44.493129 | orchestrator | 2025-04-01 19:35:44.493557 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-01 19:35:44.494413 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.157) 0:01:16.959 ********* 2025-04-01 19:35:44.651626 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:44.652398 | orchestrator | 2025-04-01 19:35:44.653532 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-01 19:35:44.654982 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.168) 0:01:17.128 ********* 2025-04-01 19:35:44.822070 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:44.823483 | orchestrator | 2025-04-01 19:35:44.824441 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-01 19:35:44.825735 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.168) 0:01:17.296 ********* 2025-04-01 19:35:44.956066 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:44.957507 | orchestrator | 2025-04-01 19:35:44.957894 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-01 19:35:44.958900 | orchestrator | Tuesday 01 April 2025 19:35:44 +0000 (0:00:00.136) 0:01:17.432 ********* 2025-04-01 19:35:45.113686 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.114484 | orchestrator | 2025-04-01 19:35:45.115906 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-01 19:35:45.116460 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.158) 0:01:17.590 ********* 2025-04-01 19:35:45.282072 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.282197 | orchestrator | 2025-04-01 19:35:45.283531 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-01 19:35:45.284145 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.167) 0:01:17.758 ********* 2025-04-01 19:35:45.430135 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.430298 | orchestrator | 2025-04-01 19:35:45.432190 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-01 19:35:45.435803 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.146) 0:01:17.904 ********* 2025-04-01 19:35:45.588591 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.589357 | orchestrator | 2025-04-01 19:35:45.590691 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-01 19:35:45.591895 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.159) 0:01:18.064 ********* 2025-04-01 19:35:45.754596 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.754959 | orchestrator | 2025-04-01 19:35:45.756045 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-01 19:35:45.757520 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.165) 0:01:18.230 ********* 2025-04-01 19:35:45.910206 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:45.910703 | orchestrator | 2025-04-01 19:35:45.910805 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-01 19:35:45.911846 | orchestrator | Tuesday 01 April 2025 19:35:45 +0000 (0:00:00.157) 0:01:18.387 ********* 2025-04-01 19:35:46.305356 | orchestrator | 
skipping: [testbed-node-5] 2025-04-01 19:35:46.305912 | orchestrator | 2025-04-01 19:35:46.307075 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-01 19:35:46.307902 | orchestrator | Tuesday 01 April 2025 19:35:46 +0000 (0:00:00.394) 0:01:18.781 ********* 2025-04-01 19:35:46.460906 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:46.462495 | orchestrator | 2025-04-01 19:35:46.464403 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-01 19:35:46.465224 | orchestrator | Tuesday 01 April 2025 19:35:46 +0000 (0:00:00.154) 0:01:18.936 ********* 2025-04-01 19:35:46.611055 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:46.612386 | orchestrator | 2025-04-01 19:35:46.614096 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-01 19:35:46.615090 | orchestrator | Tuesday 01 April 2025 19:35:46 +0000 (0:00:00.152) 0:01:19.088 ********* 2025-04-01 19:35:46.765276 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:46.766148 | orchestrator | 2025-04-01 19:35:46.767472 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-01 19:35:46.768410 | orchestrator | Tuesday 01 April 2025 19:35:46 +0000 (0:00:00.151) 0:01:19.239 ********* 2025-04-01 19:35:46.938650 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:46.939577 | orchestrator | 2025-04-01 19:35:46.940420 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-01 19:35:46.941964 | orchestrator | Tuesday 01 April 2025 19:35:46 +0000 (0:00:00.175) 0:01:19.414 ********* 2025-04-01 19:35:47.131914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:47.132020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:47.132040 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:47.132056 | orchestrator | 2025-04-01 19:35:47.132075 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-01 19:35:47.132388 | orchestrator | Tuesday 01 April 2025 19:35:47 +0000 (0:00:00.192) 0:01:19.607 ********* 2025-04-01 19:35:47.306245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:47.306932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:47.308080 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:47.310543 | orchestrator | 2025-04-01 19:35:47.504272 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-01 19:35:47.504358 | orchestrator | Tuesday 01 April 2025 19:35:47 +0000 (0:00:00.175) 0:01:19.782 ********* 2025-04-01 19:35:47.504381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:47.507522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:47.507812 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:47.507842 | orchestrator | 2025-04-01 19:35:47.509173 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-01 19:35:47.510259 | orchestrator | Tuesday 01 April 2025 19:35:47 +0000 (0:00:00.196) 0:01:19.979 ********* 2025-04-01 19:35:47.686853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:47.688033 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:47.689378 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:47.690498 | orchestrator | 2025-04-01 19:35:47.691714 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-01 19:35:47.692395 | orchestrator | Tuesday 01 April 2025 19:35:47 +0000 (0:00:00.184) 0:01:20.163 ********* 2025-04-01 19:35:47.927070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:47.928234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:47.929537 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:47.931123 | orchestrator | 2025-04-01 19:35:47.931307 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-01 19:35:47.932457 | orchestrator | Tuesday 01 April 2025 19:35:47 +0000 (0:00:00.239) 0:01:20.403 ********* 2025-04-01 19:35:48.109701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:48.111441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:48.112794 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:48.113599 | orchestrator | 2025-04-01 19:35:48.114622 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-01 19:35:48.115665 | orchestrator | Tuesday 01 April 2025 19:35:48 +0000 (0:00:00.182) 0:01:20.585 ********* 2025-04-01 19:35:48.542198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:48.542676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:48.544224 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:48.545684 | orchestrator | 2025-04-01 19:35:48.546678 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-01 19:35:48.548029 | orchestrator | Tuesday 01 April 2025 19:35:48 +0000 (0:00:00.431) 0:01:21.017 ********* 2025-04-01 19:35:48.727490 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:48.728462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:48.728501 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:48.730349 | orchestrator | 2025-04-01 19:35:48.733128 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-01 19:35:49.221365 | orchestrator | Tuesday 01 April 2025 19:35:48 +0000 (0:00:00.185) 0:01:21.203 ********* 2025-04-01 19:35:49.221476 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:49.222273 | orchestrator | 2025-04-01 19:35:49.222304 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-01 19:35:49.222356 | orchestrator | Tuesday 01 April 2025 19:35:49 +0000 (0:00:00.494) 0:01:21.697 ********* 2025-04-01 19:35:49.736745 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:49.737162 | orchestrator | 2025-04-01 19:35:49.739284 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-01 19:35:49.742162 | orchestrator | Tuesday 01 April 2025 19:35:49 +0000 (0:00:00.514) 0:01:22.211 ********* 2025-04-01 19:35:49.894244 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:49.894752 | orchestrator | 2025-04-01 19:35:49.895850 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-01 19:35:49.897061 | orchestrator | Tuesday 01 April 2025 19:35:49 +0000 (0:00:00.159) 0:01:22.371 ********* 2025-04-01 19:35:50.092404 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'vg_name': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'}) 2025-04-01 19:35:50.092820 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'vg_name': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}) 2025-04-01 19:35:50.093788 | orchestrator | 2025-04-01 19:35:50.094306 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-01 19:35:50.095171 | orchestrator | Tuesday 01 April 2025 19:35:50 +0000 (0:00:00.196) 0:01:22.568 ********* 2025-04-01 19:35:50.289018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:50.289689 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:50.290203 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:50.290811 | orchestrator | 2025-04-01 19:35:50.291175 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-01 19:35:50.292030 | orchestrator | Tuesday 01 April 2025 19:35:50 +0000 (0:00:00.197) 0:01:22.766 ********* 2025-04-01 19:35:50.490278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:50.671627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  
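For orientation: the "Create block VGs" and "Create block LVs" steps that reported changed for testbed-node-5 further up, together with the "Get list of Ceph LVs with associated VGs" / "Fail if block LV defined in lvm_volumes is missing" checks around this point, map naturally onto the community.general LVM modules plus the JSON report mode of lvs. The following is a sketch under those assumptions; _block_vg_pvs, _vg_lv_names and the 100%VG sizing are illustrative stand-ins, not the playbook's actual definitions:

# Sketch only: _block_vg_pvs (VG name -> PV path) and _vg_lv_names are assumed helper
# variables; lvm_volumes is the list of {data, data_vg} entries visible in the log items.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ _block_vg_pvs[item.data_vg] }}"   # e.g. /dev/sdb for ceph-959a80fb-...
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%VG                               # assumption: the data LV fills its VG
  loop: "{{ lvm_volumes }}"

- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Create list of VG/LV names
  ansible.builtin.set_fact:
    _vg_lv_names: "{{ _vg_lv_names | default([]) + [lv.vg_name ~ '/' ~ lv.lv_name] }}"
  loop: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
  loop_control:
    loop_var: lv

- name: Fail if block LV defined in lvm_volumes is missing
  ansible.builtin.fail:
    msg: "{{ item.data_vg }}/{{ item.data }} was not found on this node"
  when: (item.data_vg ~ '/' ~ item.data) not in (_vg_lv_names | default([]))
  loop: "{{ lvm_volumes }}"
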
2025-04-01 19:35:50.671727 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:50.671746 | orchestrator | 2025-04-01 19:35:50.671762 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-01 19:35:50.671779 | orchestrator | Tuesday 01 April 2025 19:35:50 +0000 (0:00:00.199) 0:01:22.965 ********* 2025-04-01 19:35:50.671808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'})  2025-04-01 19:35:50.672409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'})  2025-04-01 19:35:50.672445 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:35:50.676452 | orchestrator | 2025-04-01 19:35:51.336693 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-01 19:35:51.336793 | orchestrator | Tuesday 01 April 2025 19:35:50 +0000 (0:00:00.180) 0:01:23.146 ********* 2025-04-01 19:35:51.336822 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:35:51.337078 | orchestrator |  "lvm_report": { 2025-04-01 19:35:51.337464 | orchestrator |  "lv": [ 2025-04-01 19:35:51.337939 | orchestrator |  { 2025-04-01 19:35:51.338197 | orchestrator |  "lv_name": "osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7", 2025-04-01 19:35:51.340137 | orchestrator |  "vg_name": "ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7" 2025-04-01 19:35:51.341648 | orchestrator |  }, 2025-04-01 19:35:51.342616 | orchestrator |  { 2025-04-01 19:35:51.344446 | orchestrator |  "lv_name": "osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050", 2025-04-01 19:35:51.345080 | orchestrator |  "vg_name": "ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050" 2025-04-01 19:35:51.347952 | orchestrator |  } 2025-04-01 19:35:51.350640 | orchestrator |  ], 2025-04-01 19:35:51.352874 | orchestrator |  "pv": [ 2025-04-01 19:35:51.356765 | orchestrator |  { 2025-04-01 19:35:51.356942 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-01 19:35:51.356964 | orchestrator |  "vg_name": "ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7" 2025-04-01 19:35:51.356979 | orchestrator |  }, 2025-04-01 19:35:51.356991 | orchestrator |  { 2025-04-01 19:35:51.357009 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-01 19:35:51.358912 | orchestrator |  "vg_name": "ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050" 2025-04-01 19:35:51.365107 | orchestrator |  } 2025-04-01 19:35:51.366222 | orchestrator |  ] 2025-04-01 19:35:51.366249 | orchestrator |  } 2025-04-01 19:35:51.367089 | orchestrator | } 2025-04-01 19:35:51.367825 | orchestrator | 2025-04-01 19:35:51.369236 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:35:51.369275 | orchestrator | 2025-04-01 19:35:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:35:51.369625 | orchestrator | 2025-04-01 19:35:51 | INFO  | Please wait and do not abort execution. 
2025-04-01 19:35:51.369655 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-01 19:35:51.370364 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-01 19:35:51.370898 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-01 19:35:51.371424 | orchestrator | 2025-04-01 19:35:51.371780 | orchestrator | 2025-04-01 19:35:51.372611 | orchestrator | 2025-04-01 19:35:51.372996 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:35:51.373468 | orchestrator | Tuesday 01 April 2025 19:35:51 +0000 (0:00:00.666) 0:01:23.813 ********* 2025-04-01 19:35:51.373787 | orchestrator | =============================================================================== 2025-04-01 19:35:51.374159 | orchestrator | Create block VGs -------------------------------------------------------- 5.52s 2025-04-01 19:35:51.374905 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2025-04-01 19:35:51.375091 | orchestrator | Print LVM report data --------------------------------------------------- 2.42s 2025-04-01 19:35:51.375587 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.05s 2025-04-01 19:35:51.376082 | orchestrator | Add known links to the list of available block devices ------------------ 1.93s 2025-04-01 19:35:51.376674 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.81s 2025-04-01 19:35:51.377251 | orchestrator | Add known partitions to the list of available block devices ------------- 1.66s 2025-04-01 19:35:51.377556 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2025-04-01 19:35:51.378233 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-04-01 19:35:51.378491 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s 2025-04-01 19:35:51.378895 | orchestrator | Add known partitions to the list of available block devices ------------- 1.24s 2025-04-01 19:35:51.379359 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.20s 2025-04-01 19:35:51.379736 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2025-04-01 19:35:51.380135 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.93s 2025-04-01 19:35:51.380569 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s 2025-04-01 19:35:51.381133 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-04-01 19:35:51.381886 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-04-01 19:35:51.382652 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.82s 2025-04-01 19:35:51.383226 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.82s 2025-04-01 19:35:51.383959 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.81s 2025-04-01 19:35:53.588947 | orchestrator | 2025-04-01 19:35:53 | INFO  | Task c91f1059-f180-4ce1-8d98-f6da4b9ffc33 (facts) was prepared for execution. 
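
The Ceph LVM preparation play above finishes by printing an lvm_report fact that pairs each osd-block-* logical volume with its volume group and the backing physical volume (/dev/sdb and /dev/sdc on testbed-node-5). A minimal sketch of how such a report can be assembled, assuming plain lvs/pvs JSON output and generic Ansible builtins rather than the actual OSISM role code:

    # Sketch only: field names follow the lvm_report printed above; the real
    # tasks live in the OSISM roles, not in this log.
    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report.0.lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report.0.pv }}"

    - name: Print LVM report data
      ansible.builtin.debug:
        var: lvm_report

The WAL/DB LV creation tasks and the lvm_volumes checks were all skipped on this run, so only the two plain block LVs on /dev/sdb and /dev/sdc end up in the report shown above.
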
2025-04-01 19:35:57.198652 | orchestrator | 2025-04-01 19:35:53 | INFO  | It takes a moment until task c91f1059-f180-4ce1-8d98-f6da4b9ffc33 (facts) has been started and output is visible here. 2025-04-01 19:35:57.198788 | orchestrator | 2025-04-01 19:35:57.199516 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-01 19:35:57.203831 | orchestrator | 2025-04-01 19:35:57.205877 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-01 19:35:57.206860 | orchestrator | Tuesday 01 April 2025 19:35:57 +0000 (0:00:00.252) 0:00:00.252 ********* 2025-04-01 19:35:58.454955 | orchestrator | ok: [testbed-manager] 2025-04-01 19:35:58.455520 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:35:58.455834 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:35:58.456622 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:35:58.456952 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:35:58.458162 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:35:58.459247 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:35:58.459746 | orchestrator | 2025-04-01 19:35:58.460587 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-01 19:35:58.461087 | orchestrator | Tuesday 01 April 2025 19:35:58 +0000 (0:00:01.251) 0:00:01.503 ********* 2025-04-01 19:35:58.753448 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:35:58.857740 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:35:58.964058 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:35:59.060110 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:35:59.185122 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:36:00.054209 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:36:00.056757 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:36:00.064923 | orchestrator | 2025-04-01 19:36:00.064968 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-01 19:36:00.064993 | orchestrator | 2025-04-01 19:36:00.067126 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-01 19:36:00.069537 | orchestrator | Tuesday 01 April 2025 19:36:00 +0000 (0:00:01.606) 0:00:03.110 ********* 2025-04-01 19:36:04.309807 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:36:04.310142 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:36:04.310899 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:36:04.311279 | orchestrator | ok: [testbed-manager] 2025-04-01 19:36:04.314683 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:36:04.315979 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:36:04.316181 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:36:04.316560 | orchestrator | 2025-04-01 19:36:04.316851 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-01 19:36:04.317695 | orchestrator | 2025-04-01 19:36:04.317950 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-01 19:36:04.320527 | orchestrator | Tuesday 01 April 2025 19:36:04 +0000 (0:00:04.257) 0:00:07.367 ********* 2025-04-01 19:36:04.683396 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:36:04.771623 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:36:04.866866 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:36:04.948827 | orchestrator | skipping: [testbed-node-2] 2025-04-01 
19:36:05.049583 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:36:05.091123 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:36:05.092110 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:36:05.092137 | orchestrator | 2025-04-01 19:36:05.092156 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:36:05.092189 | orchestrator | 2025-04-01 19:36:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-01 19:36:05.092570 | orchestrator | 2025-04-01 19:36:05 | INFO  | Please wait and do not abort execution. 2025-04-01 19:36:05.092598 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.093130 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.093980 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.095399 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.095791 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.096490 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.096816 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:36:05.097636 | orchestrator | 2025-04-01 19:36:05.098490 | orchestrator | Tuesday 01 April 2025 19:36:05 +0000 (0:00:00.782) 0:00:08.149 ********* 2025-04-01 19:36:05.099012 | orchestrator | =============================================================================== 2025-04-01 19:36:05.099617 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.26s 2025-04-01 19:36:05.099645 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.61s 2025-04-01 19:36:05.099903 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2025-04-01 19:36:05.100101 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.78s 2025-04-01 19:36:05.772928 | orchestrator | 2025-04-01 19:36:05.777860 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Apr 1 19:36:05 UTC 2025 2025-04-01 19:36:07.448074 | orchestrator | 2025-04-01 19:36:07.448200 | orchestrator | 2025-04-01 19:36:07 | INFO  | Collection nutshell is prepared for execution 2025-04-01 19:36:07.452547 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [0] - dotfiles 2025-04-01 19:36:07.452590 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [0] - homer 2025-04-01 19:36:07.454013 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [0] - netdata 2025-04-01 19:36:07.454082 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [0] - openstackclient 2025-04-01 19:36:07.454096 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [0] - phpmyadmin 2025-04-01 19:36:07.454108 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [0] - common 2025-04-01 19:36:07.454127 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [1] -- loadbalancer 2025-04-01 19:36:07.454906 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [2] --- opensearch 2025-04-01 19:36:07.454933 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [2] --- mariadb-ng 2025-04-01 19:36:07.454947 | orchestrator | 2025-04-01 
19:36:07 | INFO  | D [3] ---- horizon 2025-04-01 19:36:07.454961 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [3] ---- keystone 2025-04-01 19:36:07.454975 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [4] ----- neutron 2025-04-01 19:36:07.454989 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ wait-for-nova 2025-04-01 19:36:07.455004 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [5] ------ octavia 2025-04-01 19:36:07.455022 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- barbican 2025-04-01 19:36:07.455467 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- designate 2025-04-01 19:36:07.455492 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- ironic 2025-04-01 19:36:07.455505 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- placement 2025-04-01 19:36:07.455517 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- magnum 2025-04-01 19:36:07.455599 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [1] -- openvswitch 2025-04-01 19:36:07.455619 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [2] --- ovn 2025-04-01 19:36:07.455677 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [1] -- memcached 2025-04-01 19:36:07.455692 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [1] -- redis 2025-04-01 19:36:07.455705 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [1] -- rabbitmq-ng 2025-04-01 19:36:07.455717 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [0] - kubernetes 2025-04-01 19:36:07.455730 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [1] -- kubeconfig 2025-04-01 19:36:07.455742 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [1] -- copy-kubeconfig 2025-04-01 19:36:07.455759 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [0] - ceph 2025-04-01 19:36:07.456911 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [1] -- ceph-pools 2025-04-01 19:36:07.457201 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [2] --- copy-ceph-keys 2025-04-01 19:36:07.457226 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [3] ---- cephclient 2025-04-01 19:36:07.457240 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-04-01 19:36:07.457253 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [4] ----- wait-for-keystone 2025-04-01 19:36:07.457265 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ kolla-ceph-rgw 2025-04-01 19:36:07.457299 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ glance 2025-04-01 19:36:07.457349 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ cinder 2025-04-01 19:36:07.457364 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ nova 2025-04-01 19:36:07.457382 | orchestrator | 2025-04-01 19:36:07 | INFO  | A [4] ----- prometheus 2025-04-01 19:36:07.605084 | orchestrator | 2025-04-01 19:36:07 | INFO  | D [5] ------ grafana 2025-04-01 19:36:07.605138 | orchestrator | 2025-04-01 19:36:07 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-04-01 19:36:07.606386 | orchestrator | 2025-04-01 19:36:07 | INFO  | Tasks are running in the background 2025-04-01 19:36:09.720698 | orchestrator | 2025-04-01 19:36:09 | INFO  | No task IDs specified, wait for all currently running tasks 2025-04-01 19:36:11.869966 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:11.872736 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:11.873616 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task 
89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:11.874522 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:11.878059 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:14.927449 | orchestrator | 2025-04-01 19:36:11 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:14.927560 | orchestrator | 2025-04-01 19:36:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:14.927596 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:14.931203 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:14.934771 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:14.935983 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:14.937851 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:14.938870 | orchestrator | 2025-04-01 19:36:14 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:14.938950 | orchestrator | 2025-04-01 19:36:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:18.015678 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:18.015833 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:18.015860 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:18.020667 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:18.026150 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:18.035679 | orchestrator | 2025-04-01 19:36:18 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:21.101738 | orchestrator | 2025-04-01 19:36:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:21.101851 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:21.101955 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:21.109347 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:21.112925 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:24.187928 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:24.188068 | orchestrator | 2025-04-01 19:36:21 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:24.188088 | orchestrator | 2025-04-01 19:36:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:24.188119 | orchestrator | 2025-04-01 19:36:24 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:24.188199 | orchestrator | 2025-04-01 
19:36:24 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:24.188585 | orchestrator | 2025-04-01 19:36:24 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:24.189059 | orchestrator | 2025-04-01 19:36:24 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:24.189466 | orchestrator | 2025-04-01 19:36:24 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:24.190077 | orchestrator | 2025-04-01 19:36:24 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:27.264872 | orchestrator | 2025-04-01 19:36:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:27.264966 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:27.267050 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:27.267080 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:27.267100 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:27.270642 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:27.270675 | orchestrator | 2025-04-01 19:36:27 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:30.317724 | orchestrator | 2025-04-01 19:36:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:30.317868 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state STARTED 2025-04-01 19:36:30.318913 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:30.321098 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:30.321650 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:30.323295 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:30.325041 | orchestrator | 2025-04-01 19:36:30 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:33.389091 | orchestrator | 2025-04-01 19:36:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:33.389213 | orchestrator | 2025-04-01 19:36:33.389231 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-04-01 19:36:33.389265 | orchestrator | 2025-04-01 19:36:33.389280 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-04-01 19:36:33.389293 | orchestrator | Tuesday 01 April 2025 19:36:18 +0000 (0:00:00.777) 0:00:00.777 ********* 2025-04-01 19:36:33.389305 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:36:33.389369 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:36:33.389384 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:36:33.389397 | orchestrator | changed: [testbed-manager] 2025-04-01 19:36:33.389409 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:36:33.389422 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:36:33.389435 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:36:33.389447 | orchestrator | 2025-04-01 19:36:33.389460 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-04-01 19:36:33.389481 | orchestrator | Tuesday 01 April 2025 19:36:22 +0000 (0:00:03.993) 0:00:04.770 ********* 2025-04-01 19:36:33.389494 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-04-01 19:36:33.389507 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-04-01 19:36:33.389525 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-04-01 19:36:33.389538 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-04-01 19:36:33.389550 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-04-01 19:36:33.389563 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-04-01 19:36:33.389575 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-04-01 19:36:33.389588 | orchestrator | 2025-04-01 19:36:33.389601 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-04-01 19:36:33.389613 | orchestrator | Tuesday 01 April 2025 19:36:24 +0000 (0:00:02.725) 0:00:07.495 ********* 2025-04-01 19:36:33.389629 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:23.389073', 'end': '2025-04-01 19:36:23.395579', 'delta': '0:00:00.006506', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389651 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:23.631192', 'end': '2025-04-01 19:36:23.637754', 'delta': '0:00:00.006562', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389667 | 
orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:23.950772', 'end': '2025-04-01 19:36:23.955615', 'delta': '0:00:00.004843', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389715 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:23.433944', 'end': '2025-04-01 19:36:23.439504', 'delta': '0:00:00.005560', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389731 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:24.200441', 'end': '2025-04-01 19:36:24.205350', 'delta': '0:00:00.004909', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389745 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:24.427394', 'end': '2025-04-01 19:36:24.432148', 'delta': '0:00:00.004754', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389764 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-01 19:36:24.599175', 'end': '2025-04-01 19:36:24.605402', 'delta': '0:00:00.006227', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-01 19:36:33.389778 | orchestrator | 2025-04-01 19:36:33.389792 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-04-01 19:36:33.389806 | orchestrator | Tuesday 01 April 2025 19:36:27 +0000 (0:00:02.532) 0:00:10.028 ********* 2025-04-01 19:36:33.389819 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-04-01 19:36:33.389841 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-04-01 19:36:33.389854 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-04-01 19:36:33.389868 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-04-01 19:36:33.389882 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-04-01 19:36:33.389895 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-04-01 19:36:33.389909 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-04-01 19:36:33.389923 | orchestrator | 2025-04-01 19:36:33.389936 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:36:33.389951 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.389966 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.389980 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.390000 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.390086 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.390102 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.390115 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:36:33.390127 | orchestrator | 2025-04-01 19:36:33.390140 | orchestrator | Tuesday 01 April 2025 19:36:30 +0000 (0:00:03.376) 0:00:13.404 ********* 2025-04-01 19:36:33.390152 | orchestrator | =============================================================================== 2025-04-01 19:36:33.390165 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.99s 2025-04-01 19:36:33.390177 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.38s 2025-04-01 19:36:33.390190 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.73s 2025-04-01 19:36:33.390202 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 2.53s 2025-04-01 19:36:33.390218 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task dabdf2ea-2827-420b-a9df-830e601c81a4 is in state SUCCESS 2025-04-01 19:36:33.392046 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:33.393069 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:33.393096 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:33.393924 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:33.396577 | orchestrator | 2025-04-01 19:36:33 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:36.480786 | orchestrator | 2025-04-01 19:36:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:36.480931 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:36.486093 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:36.490510 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:36.497122 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:36.508683 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:36.516849 | orchestrator | 2025-04-01 19:36:36 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:39.610477 | orchestrator | 2025-04-01 19:36:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:39.610601 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:39.613214 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:39.618100 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:39.621392 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:39.628161 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:39.633989 | orchestrator | 2025-04-01 19:36:39 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:42.766960 | orchestrator | 2025-04-01 19:36:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:42.767042 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:45.870971 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:45.871050 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:45.871067 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:45.871081 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:45.871096 | orchestrator | 2025-04-01 19:36:42 | INFO  | Task 
50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:45.871110 | orchestrator | 2025-04-01 19:36:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:45.871138 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:45.872258 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:45.875407 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:45.875436 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:45.875456 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:48.954171 | orchestrator | 2025-04-01 19:36:45 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:48.954264 | orchestrator | 2025-04-01 19:36:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:48.954295 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:48.956096 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:48.956132 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:48.958830 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:48.961140 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:48.961908 | orchestrator | 2025-04-01 19:36:48 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:52.020895 | orchestrator | 2025-04-01 19:36:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:52.021005 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:52.024400 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:52.026823 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:52.036487 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:52.037642 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:52.042870 | orchestrator | 2025-04-01 19:36:52 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:55.122930 | orchestrator | 2025-04-01 19:36:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:55.122999 | orchestrator | 2025-04-01 19:36:55 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state STARTED 2025-04-01 19:36:55.132133 | orchestrator | 2025-04-01 19:36:55 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:55.134191 | orchestrator | 2025-04-01 19:36:55 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:55.139713 | orchestrator | 2025-04-01 19:36:55 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:55.142979 | orchestrator | 2025-04-01 
19:36:55 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:55.145944 | orchestrator | 2025-04-01 19:36:55 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:36:58.206387 | orchestrator | 2025-04-01 19:36:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:36:58.206469 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task b16e1ee6-b0d0-41b8-a748-e3fd16a1eb0f is in state SUCCESS 2025-04-01 19:36:58.207566 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:36:58.209938 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:36:58.214080 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:36:58.215510 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:36:58.216564 | orchestrator | 2025-04-01 19:36:58 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:01.314295 | orchestrator | 2025-04-01 19:36:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:01.314432 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:01.315762 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:01.315804 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:01.316024 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:01.319152 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:01.328173 | orchestrator | 2025-04-01 19:37:01 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:04.378627 | orchestrator | 2025-04-01 19:37:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:04.378711 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:04.378777 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:04.379678 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:04.381170 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:04.381947 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:04.382640 | orchestrator | 2025-04-01 19:37:04 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:07.455694 | orchestrator | 2025-04-01 19:37:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:07.455808 | orchestrator | 2025-04-01 19:37:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:07.456925 | orchestrator | 2025-04-01 19:37:07 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:07.458791 | orchestrator | 2025-04-01 19:37:07 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:07.459632 | 
orchestrator | 2025-04-01 19:37:07 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:07.462569 | orchestrator | 2025-04-01 19:37:07 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:07.463929 | orchestrator | 2025-04-01 19:37:07 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:07.464416 | orchestrator | 2025-04-01 19:37:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:10.534081 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:10.536978 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:10.538454 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:10.543755 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:10.546139 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:10.548139 | orchestrator | 2025-04-01 19:37:10 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:13.661503 | orchestrator | 2025-04-01 19:37:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:13.661605 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:13.669503 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:13.669549 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:16.745298 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:16.745465 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:16.745483 | orchestrator | 2025-04-01 19:37:13 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:16.745499 | orchestrator | 2025-04-01 19:37:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:16.745527 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:16.752664 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:16.756037 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:16.756073 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:16.756966 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:16.758940 | orchestrator | 2025-04-01 19:37:16 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:19.815125 | orchestrator | 2025-04-01 19:37:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:19.815240 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:19.817215 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 
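
The geerlingguy.dotfiles play that completed above clones a dotfiles repository on every host and then links the configured files (only .tmux.conf in this run) into the user's home directory, removing any pre-existing regular file first. A rough sketch of that clone-and-link pattern with generic Ansible builtins; dotfiles_repo, dotfiles_repo_local_destination and dotfiles_files are assumed variables, not values taken from this log:

    # Sketch of the pattern, not the role's actual task file.
    - name: Ensure dotfiles repository is cloned locally.
      ansible.builtin.git:
        repo: "{{ dotfiles_repo }}"                      # assumed variable
        dest: "{{ dotfiles_repo_local_destination }}"    # assumed variable

    - name: Link dotfiles into home folder.
      ansible.builtin.file:
        src: "{{ dotfiles_repo_local_destination }}/{{ item }}"
        dest: "~/{{ item }}"
        state: link
        force: true        # replace an existing file, as the play above checks for
      loop: "{{ dotfiles_files }}"                       # e.g. ['.tmux.conf'] as above

In the recap above this yields ok=4/changed=2 on every host: the clone and the final link report changes, while the two existence checks do not.
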
2025-04-01 19:37:19.820504 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:19.821865 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:19.824629 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:19.826563 | orchestrator | 2025-04-01 19:37:19 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:22.917683 | orchestrator | 2025-04-01 19:37:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:22.917795 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:22.920893 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:22.920928 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:22.923801 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:22.933220 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:22.939389 | orchestrator | 2025-04-01 19:37:22 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:26.015602 | orchestrator | 2025-04-01 19:37:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:26.015735 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:26.019219 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:26.019255 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:26.019830 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:26.023348 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:26.028056 | orchestrator | 2025-04-01 19:37:26 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:29.081038 | orchestrator | 2025-04-01 19:37:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:29.081168 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:29.084025 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state STARTED 2025-04-01 19:37:29.088576 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:29.094223 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:29.097275 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:29.100245 | orchestrator | 2025-04-01 19:37:29 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:29.100871 | orchestrator | 2025-04-01 19:37:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:32.165404 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:32.165580 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 89110cab-8a68-4e9a-83f2-d609b68a95cd is in state SUCCESS 2025-04-01 19:37:32.166236 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:32.167706 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:32.168361 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:32.168397 | orchestrator | 2025-04-01 19:37:32 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:35.238796 | orchestrator | 2025-04-01 19:37:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:35.238919 | orchestrator | 2025-04-01 19:37:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:35.242775 | orchestrator | 2025-04-01 19:37:35 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:35.247665 | orchestrator | 2025-04-01 19:37:35 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:35.248262 | orchestrator | 2025-04-01 19:37:35 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:35.250196 | orchestrator | 2025-04-01 19:37:35 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:38.324458 | orchestrator | 2025-04-01 19:37:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:38.324606 | orchestrator | 2025-04-01 19:37:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:38.327029 | orchestrator | 2025-04-01 19:37:38 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:38.327120 | orchestrator | 2025-04-01 19:37:38 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state STARTED 2025-04-01 19:37:38.330484 | orchestrator | 2025-04-01 19:37:38 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:38.331367 | orchestrator | 2025-04-01 19:37:38 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:41.385833 | orchestrator | 2025-04-01 19:37:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:41.385975 | orchestrator | 2025-04-01 19:37:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:41.388874 | orchestrator | 2025-04-01 19:37:41 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:41.391409 | orchestrator | 2025-04-01 19:37:41.391460 | orchestrator | 2025-04-01 19:37:41.391476 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-04-01 19:37:41.391491 | orchestrator | 2025-04-01 19:37:41.391505 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-04-01 19:37:41.391519 | orchestrator | Tuesday 01 April 2025 19:36:19 +0000 (0:00:01.022) 0:00:01.022 ********* 2025-04-01 19:37:41.391534 | orchestrator | ok: [testbed-manager] => { 2025-04-01 19:37:41.391551 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-04-01 19:37:41.391566 | orchestrator | } 2025-04-01 19:37:41.391580 | orchestrator | 2025-04-01 19:37:41.391595 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-04-01 19:37:41.391609 | orchestrator | Tuesday 01 April 2025 19:36:19 +0000 (0:00:00.410) 0:00:01.433 ********* 2025-04-01 19:37:41.391623 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.391637 | orchestrator | 2025-04-01 19:37:41.391652 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-04-01 19:37:41.391665 | orchestrator | Tuesday 01 April 2025 19:36:21 +0000 (0:00:01.577) 0:00:03.010 ********* 2025-04-01 19:37:41.391679 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-04-01 19:37:41.391693 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-04-01 19:37:41.391707 | orchestrator | 2025-04-01 19:37:41.391721 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-04-01 19:37:41.391735 | orchestrator | Tuesday 01 April 2025 19:36:23 +0000 (0:00:01.670) 0:00:04.681 ********* 2025-04-01 19:37:41.391749 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.391763 | orchestrator | 2025-04-01 19:37:41.391777 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-01 19:37:41.391791 | orchestrator | Tuesday 01 April 2025 19:36:26 +0000 (0:00:03.390) 0:00:08.071 ********* 2025-04-01 19:37:41.391804 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.391818 | orchestrator | 2025-04-01 19:37:41.391832 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-01 19:37:41.391846 | orchestrator | Tuesday 01 April 2025 19:36:28 +0000 (0:00:01.984) 0:00:10.056 ********* 2025-04-01 19:37:41.391860 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-04-01 19:37:41.391874 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.391888 | orchestrator | 2025-04-01 19:37:41.391902 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-01 19:37:41.391917 | orchestrator | Tuesday 01 April 2025 19:36:54 +0000 (0:00:26.248) 0:00:36.305 ********* 2025-04-01 19:37:41.391933 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.391948 | orchestrator | 2025-04-01 19:37:41.391964 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:37:41.391979 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.391996 | orchestrator | 2025-04-01 19:37:41.392011 | orchestrator | Tuesday 01 April 2025 19:36:57 +0000 (0:00:02.789) 0:00:39.094 ********* 2025-04-01 19:37:41.392026 | orchestrator | =============================================================================== 2025-04-01 19:37:41.392042 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.25s 2025-04-01 19:37:41.392057 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.39s 2025-04-01 19:37:41.392093 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.79s 2025-04-01 19:37:41.392115 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.98s 2025-04-01 19:37:41.392131 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.67s 2025-04-01 19:37:41.392146 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.58s 2025-04-01 19:37:41.392162 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.41s 2025-04-01 19:37:41.392177 | orchestrator | 2025-04-01 19:37:41.392191 | orchestrator | 2025-04-01 19:37:41.392207 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-01 19:37:41.392222 | orchestrator | 2025-04-01 19:37:41.392237 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-01 19:37:41.392252 | orchestrator | Tuesday 01 April 2025 19:36:18 +0000 (0:00:00.495) 0:00:00.495 ********* 2025-04-01 19:37:41.392268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-01 19:37:41.392285 | orchestrator | 2025-04-01 19:37:41.392299 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-01 19:37:41.392313 | orchestrator | Tuesday 01 April 2025 19:36:18 +0000 (0:00:00.420) 0:00:00.916 ********* 2025-04-01 19:37:41.392347 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-01 19:37:41.392362 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-04-01 19:37:41.392376 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-01 19:37:41.392390 | orchestrator | 2025-04-01 19:37:41.392405 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-01 19:37:41.392419 | orchestrator | Tuesday 01 April 2025 19:36:21 +0000 (0:00:02.416) 0:00:03.332 ********* 2025-04-01 19:37:41.392433 | orchestrator | changed: [testbed-manager] 
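Editor's note: the "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines interleaved throughout this log come from the foreground process that launched these plays (homer, openstackclient, netdata, phpmyadmin, ...) as background tasks and now polls their task IDs until each one reports SUCCESS. A minimal sketch of that polling loop, assuming a simulated state lookup in place of the real task backend:

```python
import itertools
import time

# Simulated stand-in for the real task backend: each task reports STARTED a
# few times and then SUCCESS. The IDs are shortened examples.
_SIMULATED = {
    "aa2524f4": itertools.chain(["STARTED"] * 2, itertools.repeat("SUCCESS")),
    "50db0d34": itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS")),
}

def get_task_state(task_id: str) -> str:
    return next(_SIMULATED[task_id])

def wait_for_tasks(task_ids, interval=1):
    """Poll every task until none of them is left in a pending state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["aa2524f4", "50db0d34"])
```

The short one-second interval keeps the console output responsive while several plays run concurrently, which is why the status lines above repeat so densely between play outputs.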
2025-04-01 19:37:41.392447 | orchestrator | 2025-04-01 19:37:41.392461 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-01 19:37:41.392475 | orchestrator | Tuesday 01 April 2025 19:36:24 +0000 (0:00:02.888) 0:00:06.220 ********* 2025-04-01 19:37:41.392489 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-01 19:37:41.392503 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.392518 | orchestrator | 2025-04-01 19:37:41.392542 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-01 19:37:41.392557 | orchestrator | Tuesday 01 April 2025 19:37:16 +0000 (0:00:52.328) 0:00:58.549 ********* 2025-04-01 19:37:41.392571 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.392585 | orchestrator | 2025-04-01 19:37:41.392599 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-01 19:37:41.392613 | orchestrator | Tuesday 01 April 2025 19:37:18 +0000 (0:00:02.341) 0:01:00.891 ********* 2025-04-01 19:37:41.392627 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.392641 | orchestrator | 2025-04-01 19:37:41.392655 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-01 19:37:41.392669 | orchestrator | Tuesday 01 April 2025 19:37:20 +0000 (0:00:01.818) 0:01:02.710 ********* 2025-04-01 19:37:41.392683 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.392697 | orchestrator | 2025-04-01 19:37:41.392711 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-04-01 19:37:41.392725 | orchestrator | Tuesday 01 April 2025 19:37:24 +0000 (0:00:03.473) 0:01:06.183 ********* 2025-04-01 19:37:41.392739 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.392753 | orchestrator | 2025-04-01 19:37:41.392767 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-01 19:37:41.392781 | orchestrator | Tuesday 01 April 2025 19:37:26 +0000 (0:00:02.172) 0:01:08.356 ********* 2025-04-01 19:37:41.392795 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.392817 | orchestrator | 2025-04-01 19:37:41.392831 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-01 19:37:41.392845 | orchestrator | Tuesday 01 April 2025 19:37:27 +0000 (0:00:01.241) 0:01:09.597 ********* 2025-04-01 19:37:41.392859 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.392873 | orchestrator | 2025-04-01 19:37:41.392887 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:37:41.392901 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.392916 | orchestrator | 2025-04-01 19:37:41.392930 | orchestrator | Tuesday 01 April 2025 19:37:28 +0000 (0:00:00.671) 0:01:10.269 ********* 2025-04-01 19:37:41.392944 | orchestrator | =============================================================================== 2025-04-01 19:37:41.392958 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 52.33s 2025-04-01 19:37:41.392972 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.47s 2025-04-01 19:37:41.392986 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 2.89s 2025-04-01 19:37:41.393006 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.42s 2025-04-01 19:37:41.393020 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.34s 2025-04-01 19:37:41.393035 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.17s 2025-04-01 19:37:41.393049 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.82s 2025-04-01 19:37:41.393063 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.24s 2025-04-01 19:37:41.393077 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.67s 2025-04-01 19:37:41.393091 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.42s 2025-04-01 19:37:41.393105 | orchestrator | 2025-04-01 19:37:41.393119 | orchestrator | 2025-04-01 19:37:41.393133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:37:41.393147 | orchestrator | 2025-04-01 19:37:41.393162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:37:41.393176 | orchestrator | Tuesday 01 April 2025 19:36:18 +0000 (0:00:00.709) 0:00:00.709 ********* 2025-04-01 19:37:41.393190 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-01 19:37:41.393204 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-01 19:37:41.393218 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-01 19:37:41.393232 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-04-01 19:37:41.393246 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-01 19:37:41.393260 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-01 19:37:41.393274 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-04-01 19:37:41.393288 | orchestrator | 2025-04-01 19:37:41.393302 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-01 19:37:41.393316 | orchestrator | 2025-04-01 19:37:41.393346 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-01 19:37:41.393360 | orchestrator | Tuesday 01 April 2025 19:36:20 +0000 (0:00:02.756) 0:00:03.465 ********* 2025-04-01 19:37:41.393387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:37:41.393403 | orchestrator | 2025-04-01 19:37:41.393418 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-01 19:37:41.393432 | orchestrator | Tuesday 01 April 2025 19:36:24 +0000 (0:00:03.740) 0:00:07.206 ********* 2025-04-01 19:37:41.393446 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:37:41.393460 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:37:41.393480 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:37:41.393494 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.393509 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:37:41.393523 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:37:41.393536 | 
orchestrator | ok: [testbed-node-5] 2025-04-01 19:37:41.393550 | orchestrator | 2025-04-01 19:37:41.393565 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-01 19:37:41.393585 | orchestrator | Tuesday 01 April 2025 19:36:27 +0000 (0:00:02.886) 0:00:10.092 ********* 2025-04-01 19:37:41.393600 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.393614 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:37:41.393628 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:37:41.393642 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:37:41.393656 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:37:41.393670 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:37:41.393684 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:37:41.393697 | orchestrator | 2025-04-01 19:37:41.393712 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-01 19:37:41.393726 | orchestrator | Tuesday 01 April 2025 19:36:30 +0000 (0:00:03.449) 0:00:13.542 ********* 2025-04-01 19:37:41.393740 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.393754 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:37:41.393768 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:37:41.393787 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:37:41.393801 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:37:41.393816 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:37:41.393829 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:37:41.393843 | orchestrator | 2025-04-01 19:37:41.393858 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-01 19:37:41.393872 | orchestrator | Tuesday 01 April 2025 19:36:33 +0000 (0:00:02.706) 0:00:16.248 ********* 2025-04-01 19:37:41.393886 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:37:41.393900 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:37:41.393914 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:37:41.393928 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:37:41.393942 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:37:41.393956 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:37:41.393970 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.393984 | orchestrator | 2025-04-01 19:37:41.393998 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-01 19:37:41.394012 | orchestrator | Tuesday 01 April 2025 19:36:44 +0000 (0:00:10.522) 0:00:26.771 ********* 2025-04-01 19:37:41.394119 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:37:41.394134 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:37:41.394148 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:37:41.394161 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:37:41.394176 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:37:41.394190 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:37:41.394204 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.394217 | orchestrator | 2025-04-01 19:37:41.394232 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-01 19:37:41.394246 | orchestrator | Tuesday 01 April 2025 19:37:04 +0000 (0:00:19.924) 0:00:46.695 ********* 2025-04-01 19:37:41.394261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:37:41.394280 | orchestrator | 2025-04-01 19:37:41.394294 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-01 19:37:41.394308 | orchestrator | Tuesday 01 April 2025 19:37:06 +0000 (0:00:02.661) 0:00:49.357 ********* 2025-04-01 19:37:41.394350 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-01 19:37:41.394365 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-01 19:37:41.394379 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-01 19:37:41.394406 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-01 19:37:41.394420 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-01 19:37:41.394434 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-01 19:37:41.394448 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-01 19:37:41.394462 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-01 19:37:41.394476 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-01 19:37:41.394490 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-01 19:37:41.394504 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-01 19:37:41.394518 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-01 19:37:41.394532 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-01 19:37:41.394545 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-01 19:37:41.394559 | orchestrator | 2025-04-01 19:37:41.394573 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-01 19:37:41.394588 | orchestrator | Tuesday 01 April 2025 19:37:15 +0000 (0:00:08.279) 0:00:57.636 ********* 2025-04-01 19:37:41.394602 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.394616 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:37:41.394630 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:37:41.394644 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:37:41.394658 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:37:41.394672 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:37:41.394686 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:37:41.394700 | orchestrator | 2025-04-01 19:37:41.394714 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-01 19:37:41.394728 | orchestrator | Tuesday 01 April 2025 19:37:17 +0000 (0:00:02.599) 0:01:00.236 ********* 2025-04-01 19:37:41.394742 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:37:41.394756 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.394770 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:37:41.394783 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:37:41.394797 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:37:41.394811 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:37:41.394825 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:37:41.394838 | orchestrator | 2025-04-01 19:37:41.394852 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-01 19:37:41.394871 | orchestrator | Tuesday 01 April 2025 19:37:22 +0000 (0:00:04.660) 0:01:04.897 ********* 2025-04-01 19:37:41.394886 | 
orchestrator | ok: [testbed-node-0] 2025-04-01 19:37:41.394900 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.394914 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:37:41.394928 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:37:41.394949 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:37:41.394963 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:37:41.394977 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:37:41.394991 | orchestrator | 2025-04-01 19:37:41.395005 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-01 19:37:41.395019 | orchestrator | Tuesday 01 April 2025 19:37:26 +0000 (0:00:03.748) 0:01:08.645 ********* 2025-04-01 19:37:41.395033 | orchestrator | ok: [testbed-manager] 2025-04-01 19:37:41.395047 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:37:41.395061 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:37:41.395074 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:37:41.395088 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:37:41.395101 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:37:41.395115 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:37:41.395129 | orchestrator | 2025-04-01 19:37:41.395143 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-01 19:37:41.395157 | orchestrator | Tuesday 01 April 2025 19:37:30 +0000 (0:00:04.066) 0:01:12.712 ********* 2025-04-01 19:37:41.395171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-01 19:37:41.395193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:37:41.395208 | orchestrator | 2025-04-01 19:37:41.395222 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-01 19:37:41.395236 | orchestrator | Tuesday 01 April 2025 19:37:33 +0000 (0:00:03.014) 0:01:15.726 ********* 2025-04-01 19:37:41.395249 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.395263 | orchestrator | 2025-04-01 19:37:41.395277 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-01 19:37:41.395291 | orchestrator | Tuesday 01 April 2025 19:37:35 +0000 (0:00:02.741) 0:01:18.467 ********* 2025-04-01 19:37:41.395305 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:37:41.395337 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:37:41.395353 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:37:41.395367 | orchestrator | changed: [testbed-manager] 2025-04-01 19:37:41.395382 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:37:41.395404 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:37:41.395420 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:37:41.395434 | orchestrator | 2025-04-01 19:37:41.395448 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:37:41.395462 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395477 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395491 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395510 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395525 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395539 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395553 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:37:41.395567 | orchestrator | 2025-04-01 19:37:41.395581 | orchestrator | Tuesday 01 April 2025 19:37:38 +0000 (0:00:02.983) 0:01:21.451 ********* 2025-04-01 19:37:41.395595 | orchestrator | =============================================================================== 2025-04-01 19:37:41.395609 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.92s 2025-04-01 19:37:41.395623 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.52s 2025-04-01 19:37:41.395637 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.28s 2025-04-01 19:37:41.395651 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 4.66s 2025-04-01 19:37:41.395665 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 4.07s 2025-04-01 19:37:41.395679 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.75s 2025-04-01 19:37:41.395692 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.74s 2025-04-01 19:37:41.395706 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s 2025-04-01 19:37:41.395720 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.01s 2025-04-01 19:37:41.395733 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.99s 2025-04-01 19:37:41.395755 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.89s 2025-04-01 19:37:41.395770 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.76s 2025-04-01 19:37:41.395783 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.74s 2025-04-01 19:37:41.395797 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.71s 2025-04-01 19:37:41.395817 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.66s 2025-04-01 19:37:41.396677 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.60s 2025-04-01 19:37:41.396780 | orchestrator | 2025-04-01 19:37:41 | INFO  | Task 6306562b-8ac1-4a37-a010-5f722c6d73b5 is in state SUCCESS 2025-04-01 19:37:41.396800 | orchestrator | 2025-04-01 19:37:41 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:41.396831 | orchestrator | 2025-04-01 19:37:41 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:41.396904 | orchestrator | 2025-04-01 19:37:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:44.443301 | orchestrator | 2025-04-01 19:37:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:44.444634 | orchestrator | 
2025-04-01 19:37:44 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:44.446811 | orchestrator | 2025-04-01 19:37:44 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:44.448911 | orchestrator | 2025-04-01 19:37:44 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:44.449523 | orchestrator | 2025-04-01 19:37:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:47.493589 | orchestrator | 2025-04-01 19:37:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:47.496576 | orchestrator | 2025-04-01 19:37:47 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:47.498502 | orchestrator | 2025-04-01 19:37:47 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:47.498541 | orchestrator | 2025-04-01 19:37:47 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:50.555390 | orchestrator | 2025-04-01 19:37:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:50.555513 | orchestrator | 2025-04-01 19:37:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:50.556187 | orchestrator | 2025-04-01 19:37:50 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:50.562599 | orchestrator | 2025-04-01 19:37:50 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state STARTED 2025-04-01 19:37:50.564289 | orchestrator | 2025-04-01 19:37:50 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:53.650806 | orchestrator | 2025-04-01 19:37:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:53.650950 | orchestrator | 2025-04-01 19:37:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:53.665133 | orchestrator | 2025-04-01 19:37:53 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:53.665175 | orchestrator | 2025-04-01 19:37:53 | INFO  | Task 532f29a0-5038-4caf-b605-f53f39eff000 is in state SUCCESS 2025-04-01 19:37:53.665199 | orchestrator | 2025-04-01 19:37:53 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:56.723547 | orchestrator | 2025-04-01 19:37:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:56.723700 | orchestrator | 2025-04-01 19:37:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:56.730627 | orchestrator | 2025-04-01 19:37:56 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:37:56.734968 | orchestrator | 2025-04-01 19:37:56 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:37:59.792034 | orchestrator | 2025-04-01 19:37:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:37:59.792161 | orchestrator | 2025-04-01 19:37:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:37:59.797278 | orchestrator | 2025-04-01 19:37:59 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:02.838617 | orchestrator | 2025-04-01 19:37:59 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:02.838732 | orchestrator | 2025-04-01 19:37:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:02.838768 | orchestrator | 2025-04-01 19:38:02 | INFO 
 | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:02.839488 | orchestrator | 2025-04-01 19:38:02 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:02.839520 | orchestrator | 2025-04-01 19:38:02 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:05.889743 | orchestrator | 2025-04-01 19:38:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:05.889877 | orchestrator | 2025-04-01 19:38:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:08.932826 | orchestrator | 2025-04-01 19:38:05 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:08.932951 | orchestrator | 2025-04-01 19:38:05 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:08.932972 | orchestrator | 2025-04-01 19:38:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:08.933004 | orchestrator | 2025-04-01 19:38:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:08.935242 | orchestrator | 2025-04-01 19:38:08 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:08.939054 | orchestrator | 2025-04-01 19:38:08 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:11.996795 | orchestrator | 2025-04-01 19:38:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:11.996934 | orchestrator | 2025-04-01 19:38:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:12.002998 | orchestrator | 2025-04-01 19:38:11 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:12.012521 | orchestrator | 2025-04-01 19:38:12 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:15.057830 | orchestrator | 2025-04-01 19:38:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:15.057950 | orchestrator | 2025-04-01 19:38:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:15.058087 | orchestrator | 2025-04-01 19:38:15 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:15.060016 | orchestrator | 2025-04-01 19:38:15 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:18.118199 | orchestrator | 2025-04-01 19:38:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:18.118399 | orchestrator | 2025-04-01 19:38:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:18.118697 | orchestrator | 2025-04-01 19:38:18 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:18.119705 | orchestrator | 2025-04-01 19:38:18 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:21.184237 | orchestrator | 2025-04-01 19:38:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:21.184420 | orchestrator | 2025-04-01 19:38:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:21.184759 | orchestrator | 2025-04-01 19:38:21 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:21.188789 | orchestrator | 2025-04-01 19:38:21 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:24.275652 | orchestrator | 2025-04-01 19:38:21 | INFO  | Wait 1 second(s) until 
the next check 2025-04-01 19:38:24.275810 | orchestrator | 2025-04-01 19:38:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:24.276537 | orchestrator | 2025-04-01 19:38:24 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:24.277153 | orchestrator | 2025-04-01 19:38:24 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:27.325419 | orchestrator | 2025-04-01 19:38:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:27.325544 | orchestrator | 2025-04-01 19:38:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:27.326360 | orchestrator | 2025-04-01 19:38:27 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:27.327687 | orchestrator | 2025-04-01 19:38:27 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:30.370960 | orchestrator | 2025-04-01 19:38:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:30.371096 | orchestrator | 2025-04-01 19:38:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:30.372893 | orchestrator | 2025-04-01 19:38:30 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:30.373975 | orchestrator | 2025-04-01 19:38:30 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:33.418980 | orchestrator | 2025-04-01 19:38:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:33.419091 | orchestrator | 2025-04-01 19:38:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:33.419163 | orchestrator | 2025-04-01 19:38:33 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:33.420052 | orchestrator | 2025-04-01 19:38:33 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:36.463784 | orchestrator | 2025-04-01 19:38:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:36.463922 | orchestrator | 2025-04-01 19:38:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:36.467119 | orchestrator | 2025-04-01 19:38:36 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:39.518268 | orchestrator | 2025-04-01 19:38:36 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:39.518417 | orchestrator | 2025-04-01 19:38:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:39.518456 | orchestrator | 2025-04-01 19:38:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:39.518744 | orchestrator | 2025-04-01 19:38:39 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:39.518777 | orchestrator | 2025-04-01 19:38:39 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:42.566271 | orchestrator | 2025-04-01 19:38:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:42.566435 | orchestrator | 2025-04-01 19:38:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:42.566518 | orchestrator | 2025-04-01 19:38:42 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:42.566542 | orchestrator | 2025-04-01 19:38:42 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 
19:38:45.615725 | orchestrator | 2025-04-01 19:38:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:45.615850 | orchestrator | 2025-04-01 19:38:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:45.617343 | orchestrator | 2025-04-01 19:38:45 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:45.620011 | orchestrator | 2025-04-01 19:38:45 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:45.620600 | orchestrator | 2025-04-01 19:38:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:48.668423 | orchestrator | 2025-04-01 19:38:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:48.668630 | orchestrator | 2025-04-01 19:38:48 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:48.669282 | orchestrator | 2025-04-01 19:38:48 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:51.713701 | orchestrator | 2025-04-01 19:38:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:51.713827 | orchestrator | 2025-04-01 19:38:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:51.716217 | orchestrator | 2025-04-01 19:38:51 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:51.721305 | orchestrator | 2025-04-01 19:38:51 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:54.773566 | orchestrator | 2025-04-01 19:38:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:54.773695 | orchestrator | 2025-04-01 19:38:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:57.832254 | orchestrator | 2025-04-01 19:38:54 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:57.832409 | orchestrator | 2025-04-01 19:38:54 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state STARTED 2025-04-01 19:38:57.832431 | orchestrator | 2025-04-01 19:38:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:38:57.832465 | orchestrator | 2025-04-01 19:38:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:38:57.833892 | orchestrator | 2025-04-01 19:38:57 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:38:57.834852 | orchestrator | 2025-04-01 19:38:57 | INFO  | Task 50db0d34-b7f1-469c-80c0-6fceee75df94 is in state SUCCESS 2025-04-01 19:38:57.837597 | orchestrator | 2025-04-01 19:38:57.837640 | orchestrator | 2025-04-01 19:38:57.837655 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-04-01 19:38:57.837670 | orchestrator | 2025-04-01 19:38:57.837685 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-04-01 19:38:57.837719 | orchestrator | Tuesday 01 April 2025 19:36:41 +0000 (0:00:00.632) 0:00:00.632 ********* 2025-04-01 19:38:57.837734 | orchestrator | ok: [testbed-manager] 2025-04-01 19:38:57.837749 | orchestrator | 2025-04-01 19:38:57.837763 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-04-01 19:38:57.837777 | orchestrator | Tuesday 01 April 2025 19:36:43 +0000 (0:00:02.296) 0:00:02.929 ********* 2025-04-01 19:38:57.837792 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-04-01 
19:38:57.837812 | orchestrator | 2025-04-01 19:38:57.837827 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-04-01 19:38:57.837841 | orchestrator | Tuesday 01 April 2025 19:36:45 +0000 (0:00:01.242) 0:00:04.172 ********* 2025-04-01 19:38:57.837855 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.837869 | orchestrator | 2025-04-01 19:38:57.837883 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-04-01 19:38:57.837897 | orchestrator | Tuesday 01 April 2025 19:36:46 +0000 (0:00:01.881) 0:00:06.054 ********* 2025-04-01 19:38:57.837911 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-04-01 19:38:57.837926 | orchestrator | ok: [testbed-manager] 2025-04-01 19:38:57.837940 | orchestrator | 2025-04-01 19:38:57.837954 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-04-01 19:38:57.837968 | orchestrator | Tuesday 01 April 2025 19:37:48 +0000 (0:01:01.855) 0:01:07.910 ********* 2025-04-01 19:38:57.837982 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.837995 | orchestrator | 2025-04-01 19:38:57.838009 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:38:57.838090 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:38:57.838107 | orchestrator | 2025-04-01 19:38:57.838122 | orchestrator | Tuesday 01 April 2025 19:37:52 +0000 (0:00:03.823) 0:01:11.733 ********* 2025-04-01 19:38:57.838136 | orchestrator | =============================================================================== 2025-04-01 19:38:57.838150 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.86s 2025-04-01 19:38:57.838164 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.82s 2025-04-01 19:38:57.838178 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.30s 2025-04-01 19:38:57.838195 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.88s 2025-04-01 19:38:57.838210 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.24s 2025-04-01 19:38:57.838225 | orchestrator | 2025-04-01 19:38:57.838241 | orchestrator | 2025-04-01 19:38:57.838256 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-04-01 19:38:57.838271 | orchestrator | 2025-04-01 19:38:57.838287 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-01 19:38:57.838302 | orchestrator | Tuesday 01 April 2025 19:36:11 +0000 (0:00:00.446) 0:00:00.446 ********* 2025-04-01 19:38:57.838317 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:38:57.838334 | orchestrator | 2025-04-01 19:38:57.838349 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-04-01 19:38:57.838393 | orchestrator | Tuesday 01 April 2025 19:36:13 +0000 (0:00:02.352) 0:00:02.799 ********* 2025-04-01 19:38:57.838409 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838424 | orchestrator | changed: 
[testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838440 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838455 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838471 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838496 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838511 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838527 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838542 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838558 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838572 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838586 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838600 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-01 19:38:57.838614 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838628 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838647 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838661 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838685 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-01 19:38:57.838700 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838715 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838729 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-01 19:38:57.838743 | orchestrator | 2025-04-01 19:38:57.838757 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-01 19:38:57.838772 | orchestrator | Tuesday 01 April 2025 19:36:18 +0000 (0:00:05.069) 0:00:07.868 ********* 2025-04-01 19:38:57.838786 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:38:57.838806 | orchestrator | 2025-04-01 19:38:57.838821 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-04-01 19:38:57.838835 | orchestrator | Tuesday 01 April 2025 19:36:21 +0000 (0:00:02.722) 0:00:10.591 ********* 2025-04-01 19:38:57.838853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838908 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.838977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.838992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839029 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839182 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
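Editor's note: each changed:/skipping: item in this loop carries the full service definition (container name, image, environment, volumes) for fluentd, kolla-toolbox and cron. As a hypothetical sketch only, not kolla-ansible's real logic, this is roughly how such a mapping is filtered per item; the two feature flags are assumptions chosen to match the behaviour visible here (extra CA certificates copied, backend TLS certificates skipped):

```python
# One entry of the services mapping, reduced to the fields used below
# (values copied from the cron item printed in the log).
services = {
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20241206",
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    },
}

# Assumed feature flags for the example.
copy_ca_into_containers = True
enable_tls_backend = False

for name, svc in services.items():
    if svc["enabled"] and copy_ca_into_containers:
        print(f"copy extra CA certificates into {svc['container_name']}")
    if svc["enabled"] and enable_tls_backend:
        print(f"copy backend TLS certificate into {svc['container_name']}")
    else:
        print(f"skipping backend TLS certificate for {svc['container_name']}")
```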
2025-04-01 19:38:57.839197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.839225 | orchestrator | 2025-04-01 19:38:57.839240 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-04-01 19:38:57.839254 | orchestrator | Tuesday 01 April 2025 19:36:27 +0000 (0:00:05.377) 0:00:15.969 ********* 2025-04-01 19:38:57.839275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839330 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.839345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-04-01 19:38:57.839392 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839409 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839528 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.839542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839586 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.839600 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.839614 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.839634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-04-01 19:38:57.839650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839686 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.839700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839743 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.839757 | orchestrator | 2025-04-01 19:38:57.839771 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-01 19:38:57.839785 | orchestrator | Tuesday 01 April 2025 19:36:29 +0000 (0:00:02.524) 0:00:18.493 ********* 2025-04-01 19:38:57.839800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839820 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839835 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839856 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.839870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.839936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839950 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.839975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.839990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.840011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840041 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.840055 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.840070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.840089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840119 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.840133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.840154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840190 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.840205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-01 19:38:57.840219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.840248 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.840262 | orchestrator | 2025-04-01 19:38:57.840276 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-04-01 19:38:57.840290 | orchestrator | Tuesday 01 April 2025 19:36:33 +0000 (0:00:03.466) 0:00:21.960 ********* 2025-04-01 19:38:57.840304 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.840319 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.840332 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.840346 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.840413 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.840429 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.840444 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.840458 | orchestrator | 2025-04-01 19:38:57.840472 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-01 19:38:57.840486 | orchestrator | Tuesday 01 April 2025 19:36:35 +0000 (0:00:02.167) 0:00:24.127 ********* 2025-04-01 19:38:57.840501 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.840515 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.840528 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.840542 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.840556 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.840570 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.840590 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.840604 | orchestrator | 2025-04-01 19:38:57.840618 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-04-01 19:38:57.840633 | orchestrator | Tuesday 01 April 2025 19:36:36 +0000 (0:00:01.460) 0:00:25.588 ********* 2025-04-01 19:38:57.840647 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:38:57.840660 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.840674 | 
orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.840688 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.840702 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.840716 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.840729 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.840743 | orchestrator | 2025-04-01 19:38:57.840757 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-04-01 19:38:57.840772 | orchestrator | Tuesday 01 April 2025 19:37:20 +0000 (0:00:43.563) 0:01:09.151 ********* 2025-04-01 19:38:57.840786 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:38:57.840807 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:38:57.840821 | orchestrator | ok: [testbed-manager] 2025-04-01 19:38:57.840835 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:38:57.840849 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:38:57.840863 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:38:57.840877 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:38:57.840891 | orchestrator | 2025-04-01 19:38:57.840906 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-01 19:38:57.840919 | orchestrator | Tuesday 01 April 2025 19:37:25 +0000 (0:00:05.283) 0:01:14.435 ********* 2025-04-01 19:38:57.840931 | orchestrator | ok: [testbed-manager] 2025-04-01 19:38:57.840944 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:38:57.840961 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:38:57.840973 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:38:57.840986 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:38:57.840998 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:38:57.841010 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:38:57.841023 | orchestrator | 2025-04-01 19:38:57.841035 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-04-01 19:38:57.841048 | orchestrator | Tuesday 01 April 2025 19:37:27 +0000 (0:00:01.935) 0:01:16.370 ********* 2025-04-01 19:38:57.841061 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.841074 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.841086 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.841099 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.841111 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.841123 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.841135 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.841148 | orchestrator | 2025-04-01 19:38:57.841161 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-04-01 19:38:57.841173 | orchestrator | Tuesday 01 April 2025 19:37:29 +0000 (0:00:02.077) 0:01:18.447 ********* 2025-04-01 19:38:57.841186 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:38:57.841198 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:38:57.841210 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:38:57.841223 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:38:57.841235 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:38:57.841248 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:38:57.841260 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:38:57.841272 | orchestrator | 2025-04-01 19:38:57.841285 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-01 
19:38:57.841298 | orchestrator | Tuesday 01 April 2025 19:37:30 +0000 (0:00:01.109) 0:01:19.557 ********* 2025-04-01 19:38:57.841311 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841396 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.841516 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841617 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.841667 | orchestrator | 2025-04-01 19:38:57.841680 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-04-01 19:38:57.841693 | orchestrator | Tuesday 01 April 2025 19:37:36 +0000 (0:00:06.325) 0:01:25.882 ********* 2025-04-01 19:38:57.841705 | orchestrator | [WARNING]: Skipped 2025-04-01 19:38:57.841718 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-04-01 19:38:57.841730 | orchestrator | to this access issue: 2025-04-01 19:38:57.841743 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-04-01 19:38:57.841755 | orchestrator | directory 2025-04-01 19:38:57.841767 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:38:57.841780 | orchestrator | 2025-04-01 19:38:57.841798 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-04-01 19:38:57.841811 | orchestrator | Tuesday 01 April 2025 19:37:38 +0000 (0:00:01.096) 0:01:26.979 ********* 2025-04-01 19:38:57.841823 | orchestrator | [WARNING]: Skipped 2025-04-01 19:38:57.841840 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-04-01 19:38:57.841853 | orchestrator | to this access issue: 2025-04-01 19:38:57.841865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-04-01 19:38:57.841878 | orchestrator | directory 2025-04-01 19:38:57.841890 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:38:57.841903 | orchestrator | 2025-04-01 19:38:57.841915 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-04-01 19:38:57.841928 | orchestrator | Tuesday 01 April 2025 19:37:38 +0000 (0:00:00.580) 0:01:27.559 ********* 2025-04-01 19:38:57.841940 | orchestrator | [WARNING]: Skipped 2025-04-01 19:38:57.841953 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-04-01 19:38:57.841965 | orchestrator | to this access issue: 2025-04-01 19:38:57.841977 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-04-01 19:38:57.841990 | orchestrator | directory 2025-04-01 19:38:57.842002 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:38:57.842015 | orchestrator | 2025-04-01 19:38:57.842057 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-04-01 19:38:57.842070 | orchestrator | Tuesday 01 April 2025 19:37:39 +0000 (0:00:00.724) 0:01:28.284 ********* 2025-04-01 19:38:57.842083 | orchestrator | [WARNING]: Skipped 2025-04-01 19:38:57.842095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-04-01 19:38:57.842108 | orchestrator | to this access issue: 2025-04-01 19:38:57.842121 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-04-01 19:38:57.842133 | orchestrator | directory 2025-04-01 19:38:57.842146 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 19:38:57.842158 | orchestrator | 2025-04-01 19:38:57.842171 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-04-01 19:38:57.842183 | orchestrator | Tuesday 01 April 2025 19:37:40 +0000 (0:00:00.771) 0:01:29.055 ********* 2025-04-01 19:38:57.842195 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.842208 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.842220 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.842233 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.842245 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.842258 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.842270 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.842282 | orchestrator | 2025-04-01 19:38:57.842295 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-04-01 19:38:57.842308 | orchestrator | Tuesday 01 April 2025 19:37:46 +0000 (0:00:06.072) 0:01:35.128 ********* 2025-04-01 19:38:57.842320 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842333 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842346 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842398 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842413 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842425 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842438 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-04-01 19:38:57.842457 | orchestrator | 2025-04-01 19:38:57.842469 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-04-01 19:38:57.842482 | orchestrator | Tuesday 01 April 2025 19:37:49 +0000 (0:00:03.666) 0:01:38.795 ********* 2025-04-01 19:38:57.842494 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.842507 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.842519 | orchestrator | changed: 
[testbed-node-0] 2025-04-01 19:38:57.842532 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.842545 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.842564 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.842577 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.842589 | orchestrator | 2025-04-01 19:38:57.842602 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-01 19:38:57.842614 | orchestrator | Tuesday 01 April 2025 19:37:53 +0000 (0:00:03.270) 0:01:42.065 ********* 2025-04-01 19:38:57.842627 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842658 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842685 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842702 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842720 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842754 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842829 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842847 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842874 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842887 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.842900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:38:57.842916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842927 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.842942 | orchestrator | 2025-04-01 19:38:57.842953 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-01 19:38:57.842963 | orchestrator | Tuesday 01 April 2025 19:37:56 +0000 (0:00:03.096) 0:01:45.161 ********* 2025-04-01 19:38:57.842973 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.842984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.842994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.843004 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.843015 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.843025 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.843035 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-01 19:38:57.843045 | orchestrator | 2025-04-01 19:38:57.843055 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-01 19:38:57.843109 | orchestrator | Tuesday 01 April 2025 19:37:58 +0000 (0:00:02.515) 0:01:47.677 ********* 2025-04-01 19:38:57.843120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843131 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
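[editor's note] Each TASK header above is followed by a profile line of the form "Tuesday 01 April 2025 19:37:58 +0000 (0:00:02.515) 0:01:47.677", where the parenthesised value is the duration of the previous task and the final value is the cumulative play time. A minimal sketch, assuming only that format, for extracting those durations from a saved copy of this console log to spot slow steps such as the ~43.5 s fluentd image check above; it is illustrative and not part of the job output:

    import re

    # Matches the profile suffix printed after each TASK header above, e.g.
    # "... (0:00:43.563) 0:01:09.151": "(previous task duration) cumulative time".
    TIMING = re.compile(r"\((\d+):(\d{2}):(\d{2}\.\d+)\)\s+\d+:\d{2}:\d{2}\.\d+")

    def task_duration_seconds(line):
        """Return the previous task's duration in seconds, or None if absent."""
        match = TIMING.search(line)
        if not match:
            return None
        hours, minutes, seconds = match.groups()
        return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

    # Example with a line copied from this log: prints 43.563.
    print(task_duration_seconds(
        "Tuesday 01 April 2025 19:37:20 +0000 (0:00:43.563) 0:01:09.151 *********"
    ))

[end editor's note]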
2025-04-01 19:38:57.843141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843151 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843161 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843171 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843182 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-01 19:38:57.843192 | orchestrator | 2025-04-01 19:38:57.843202 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-01 19:38:57.843212 | orchestrator | Tuesday 01 April 2025 19:38:01 +0000 (0:00:02.816) 0:01:50.493 ********* 2025-04-01 19:38:57.843223 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843377 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-01 19:38:57.843451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843462 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843498 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:38:57.843540 | orchestrator | 2025-04-01 19:38:57.843551 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-01 19:38:57.843561 | orchestrator | Tuesday 01 April 2025 19:38:05 +0000 (0:00:03.991) 0:01:54.485 ********* 2025-04-01 19:38:57.843571 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.843585 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.843596 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.843606 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.843616 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.843626 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.843636 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.843646 | orchestrator | 2025-04-01 19:38:57.843656 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-01 19:38:57.843666 | orchestrator | Tuesday 01 April 2025 19:38:07 +0000 (0:00:02.043) 0:01:56.529 ********* 2025-04-01 19:38:57.843676 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.843686 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.843696 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.843711 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.843721 | 
orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.843731 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.843741 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.843751 | orchestrator | 2025-04-01 19:38:57.843761 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843772 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:01.599) 0:01:58.128 ********* 2025-04-01 19:38:57.843787 | orchestrator | 2025-04-01 19:38:57.843797 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843807 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.065) 0:01:58.194 ********* 2025-04-01 19:38:57.843817 | orchestrator | 2025-04-01 19:38:57.843827 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843837 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.061) 0:01:58.255 ********* 2025-04-01 19:38:57.843847 | orchestrator | 2025-04-01 19:38:57.843858 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843867 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.059) 0:01:58.314 ********* 2025-04-01 19:38:57.843878 | orchestrator | 2025-04-01 19:38:57.843888 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843898 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.261) 0:01:58.575 ********* 2025-04-01 19:38:57.843908 | orchestrator | 2025-04-01 19:38:57.843918 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843928 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.060) 0:01:58.636 ********* 2025-04-01 19:38:57.843938 | orchestrator | 2025-04-01 19:38:57.843949 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-01 19:38:57.843959 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.055) 0:01:58.692 ********* 2025-04-01 19:38:57.843969 | orchestrator | 2025-04-01 19:38:57.843979 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-04-01 19:38:57.843989 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.080) 0:01:58.773 ********* 2025-04-01 19:38:57.843999 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.844009 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.844019 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.844029 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.844039 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.844049 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.844059 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.844069 | orchestrator | 2025-04-01 19:38:57.844079 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-04-01 19:38:57.844090 | orchestrator | Tuesday 01 April 2025 19:38:18 +0000 (0:00:08.347) 0:02:07.120 ********* 2025-04-01 19:38:57.844100 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.844110 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.844120 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.844130 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.844140 | 
orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.844150 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.844160 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.844170 | orchestrator | 2025-04-01 19:38:57.844180 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-04-01 19:38:57.844191 | orchestrator | Tuesday 01 April 2025 19:38:43 +0000 (0:00:25.346) 0:02:32.467 ********* 2025-04-01 19:38:57.844201 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:38:57.844211 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:38:57.844221 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:38:57.844231 | orchestrator | ok: [testbed-manager] 2025-04-01 19:38:57.844242 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:38:57.844252 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:38:57.844262 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:38:57.844272 | orchestrator | 2025-04-01 19:38:57.844282 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-04-01 19:38:57.844292 | orchestrator | Tuesday 01 April 2025 19:38:46 +0000 (0:00:03.067) 0:02:35.534 ********* 2025-04-01 19:38:57.844303 | orchestrator | changed: [testbed-manager] 2025-04-01 19:38:57.844313 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:38:57.844323 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:38:57.844333 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:38:57.844348 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:38:57.844372 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:38:57.844383 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:38:57.844393 | orchestrator | 2025-04-01 19:38:57.844404 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:38:57.844414 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:38:57.844425 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:38:57.844436 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:38:57.844450 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:39:00.914639 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:39:00.914755 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:39:00.914773 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:39:00.914788 | orchestrator | 2025-04-01 19:39:00.914803 | orchestrator | 2025-04-01 19:39:00.914818 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:39:00.914833 | orchestrator | Tuesday 01 April 2025 19:38:56 +0000 (0:00:10.256) 0:02:45.790 ********* 2025-04-01 19:39:00.914848 | orchestrator | =============================================================================== 2025-04-01 19:39:00.914862 | orchestrator | common : Ensure fluentd image is present for label check --------------- 43.56s 2025-04-01 19:39:00.914877 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.35s 2025-04-01 19:39:00.914909 | orchestrator | 
common : Restart cron container ---------------------------------------- 10.26s 2025-04-01 19:39:00.914924 | orchestrator | common : Restart fluentd container -------------------------------------- 8.35s 2025-04-01 19:39:00.914938 | orchestrator | common : Copying over config.json files for services -------------------- 6.33s 2025-04-01 19:39:00.914952 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 6.07s 2025-04-01 19:39:00.914966 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.38s 2025-04-01 19:39:00.914980 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 5.28s 2025-04-01 19:39:00.914994 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.07s 2025-04-01 19:39:00.915008 | orchestrator | common : Check common containers ---------------------------------------- 3.99s 2025-04-01 19:39:00.915022 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.67s 2025-04-01 19:39:00.915036 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.47s 2025-04-01 19:39:00.915050 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.27s 2025-04-01 19:39:00.915064 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.10s 2025-04-01 19:39:00.915079 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.07s 2025-04-01 19:39:00.915093 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.82s 2025-04-01 19:39:00.915107 | orchestrator | common : include_tasks -------------------------------------------------- 2.72s 2025-04-01 19:39:00.915121 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.52s 2025-04-01 19:39:00.915136 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.52s 2025-04-01 19:39:00.915172 | orchestrator | common : include_tasks -------------------------------------------------- 2.35s 2025-04-01 19:39:00.915189 | orchestrator | 2025-04-01 19:38:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:00.915221 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:00.919143 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:00.923670 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:00.923700 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:00.925659 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:00.927579 | orchestrator | 2025-04-01 19:39:00 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:03.987774 | orchestrator | 2025-04-01 19:39:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:03.987907 | orchestrator | 2025-04-01 19:39:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:03.988504 | orchestrator | 2025-04-01 19:39:03 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:03.992554 | orchestrator | 
2025-04-01 19:39:03 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:03.993576 | orchestrator | 2025-04-01 19:39:03 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:03.995678 | orchestrator | 2025-04-01 19:39:03 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:03.999845 | orchestrator | 2025-04-01 19:39:03 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:07.048425 | orchestrator | 2025-04-01 19:39:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:07.048563 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:07.049075 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:07.049546 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:07.050547 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:07.051637 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:07.051992 | orchestrator | 2025-04-01 19:39:07 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:10.111006 | orchestrator | 2025-04-01 19:39:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:10.111151 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:10.115337 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:10.122693 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:10.125044 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:10.126545 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:10.126607 | orchestrator | 2025-04-01 19:39:10 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:13.189416 | orchestrator | 2025-04-01 19:39:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:13.189538 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:13.190323 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:13.192167 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:13.193655 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:13.195513 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:13.197738 | orchestrator | 2025-04-01 19:39:13 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:16.276627 | orchestrator | 2025-04-01 19:39:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:16.276761 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 
19:39:16.278920 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:16.279697 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:16.281162 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:16.282546 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:16.288110 | orchestrator | 2025-04-01 19:39:16 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:19.334365 | orchestrator | 2025-04-01 19:39:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:19.334526 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:19.335139 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:19.337330 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:19.339104 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:19.341813 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:19.342009 | orchestrator | 2025-04-01 19:39:19 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:19.342412 | orchestrator | 2025-04-01 19:39:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:22.384592 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:22.386316 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state STARTED 2025-04-01 19:39:22.386349 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:22.386365 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:22.386404 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:22.386426 | orchestrator | 2025-04-01 19:39:22 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:25.417592 | orchestrator | 2025-04-01 19:39:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:25.417735 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:25.422074 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:25.422125 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task 976ad07a-ed43-4261-9e6d-b85417c682ec is in state SUCCESS 2025-04-01 19:39:28.478451 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:28.478562 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:28.478579 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:28.478594 | orchestrator | 2025-04-01 19:39:25 | INFO  | Task 
000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:28.478611 | orchestrator | 2025-04-01 19:39:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:28.478643 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:28.480019 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:28.480994 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:28.481807 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:28.482562 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state STARTED 2025-04-01 19:39:28.483537 | orchestrator | 2025-04-01 19:39:28 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:31.536836 | orchestrator | 2025-04-01 19:39:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:31.536973 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:31.538770 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:31.543146 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:31.544000 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:31.544950 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task 0e9229ae-af97-47bb-a35e-32955e832e3c is in state SUCCESS 2025-04-01 19:39:31.546303 | orchestrator | 2025-04-01 19:39:31.546341 | orchestrator | 2025-04-01 19:39:31.546357 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:39:31.546374 | orchestrator | 2025-04-01 19:39:31.546414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:39:31.546431 | orchestrator | Tuesday 01 April 2025 19:39:04 +0000 (0:00:01.010) 0:00:01.010 ********* 2025-04-01 19:39:31.546447 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:39:31.546465 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:39:31.546480 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:39:31.546494 | orchestrator | 2025-04-01 19:39:31.546508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:39:31.546523 | orchestrator | Tuesday 01 April 2025 19:39:05 +0000 (0:00:00.881) 0:00:01.892 ********* 2025-04-01 19:39:31.546538 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-04-01 19:39:31.546582 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-04-01 19:39:31.546597 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-04-01 19:39:31.546611 | orchestrator | 2025-04-01 19:39:31.546634 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-04-01 19:39:31.546648 | orchestrator | 2025-04-01 19:39:31.546662 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-04-01 19:39:31.546676 | orchestrator | Tuesday 01 April 2025 19:39:06 +0000 (0:00:00.629) 0:00:02.522 ********* 2025-04-01 19:39:31.546703 | orchestrator | 
included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:39:31.546719 | orchestrator | 2025-04-01 19:39:31.546733 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-04-01 19:39:31.546748 | orchestrator | Tuesday 01 April 2025 19:39:07 +0000 (0:00:01.399) 0:00:03.922 ********* 2025-04-01 19:39:31.546762 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-01 19:39:31.546776 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-01 19:39:31.546790 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-01 19:39:31.546804 | orchestrator | 2025-04-01 19:39:31.546818 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-04-01 19:39:31.546832 | orchestrator | Tuesday 01 April 2025 19:39:09 +0000 (0:00:01.485) 0:00:05.407 ********* 2025-04-01 19:39:31.546847 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-01 19:39:31.546861 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-01 19:39:31.546875 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-01 19:39:31.546888 | orchestrator | 2025-04-01 19:39:31.546903 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-04-01 19:39:31.546916 | orchestrator | Tuesday 01 April 2025 19:39:14 +0000 (0:00:05.503) 0:00:10.911 ********* 2025-04-01 19:39:31.546930 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:39:31.546950 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:39:31.546964 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:39:31.546979 | orchestrator | 2025-04-01 19:39:31.546997 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-04-01 19:39:31.547011 | orchestrator | Tuesday 01 April 2025 19:39:19 +0000 (0:00:04.550) 0:00:15.462 ********* 2025-04-01 19:39:31.547025 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:39:31.547039 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:39:31.547053 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:39:31.547067 | orchestrator | 2025-04-01 19:39:31.547081 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:39:31.547095 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:31.547110 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:31.547125 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:31.547139 | orchestrator | 2025-04-01 19:39:31.547152 | orchestrator | 2025-04-01 19:39:31.547167 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:39:31.547180 | orchestrator | Tuesday 01 April 2025 19:39:23 +0000 (0:00:04.020) 0:00:19.482 ********* 2025-04-01 19:39:31.547194 | orchestrator | =============================================================================== 2025-04-01 19:39:31.547208 | orchestrator | memcached : Copying over config.json files for services ----------------- 5.51s 2025-04-01 19:39:31.547222 | orchestrator | memcached : Check memcached container ----------------------------------- 4.54s 2025-04-01 19:39:31.547236 | orchestrator | memcached : Restart memcached container 
--------------------------------- 4.02s 2025-04-01 19:39:31.547250 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.49s 2025-04-01 19:39:31.547271 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.40s 2025-04-01 19:39:31.547285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.88s 2025-04-01 19:39:31.547299 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-04-01 19:39:31.547313 | orchestrator | 2025-04-01 19:39:31.547327 | orchestrator | 2025-04-01 19:39:31.547341 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:39:31.547355 | orchestrator | 2025-04-01 19:39:31.547369 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:39:31.547402 | orchestrator | Tuesday 01 April 2025 19:39:03 +0000 (0:00:00.697) 0:00:00.697 ********* 2025-04-01 19:39:31.547417 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:39:31.547431 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:39:31.547445 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:39:31.547460 | orchestrator | 2025-04-01 19:39:31.547474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:39:31.547501 | orchestrator | Tuesday 01 April 2025 19:39:04 +0000 (0:00:01.051) 0:00:01.749 ********* 2025-04-01 19:39:31.547516 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-01 19:39:31.547530 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-01 19:39:31.547544 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-01 19:39:31.547558 | orchestrator | 2025-04-01 19:39:31.547572 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-01 19:39:31.547586 | orchestrator | 2025-04-01 19:39:31.547600 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-01 19:39:31.547614 | orchestrator | Tuesday 01 April 2025 19:39:05 +0000 (0:00:00.802) 0:00:02.551 ********* 2025-04-01 19:39:31.547628 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:39:31.547642 | orchestrator | 2025-04-01 19:39:31.547656 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-01 19:39:31.547670 | orchestrator | Tuesday 01 April 2025 19:39:06 +0000 (0:00:01.252) 0:00:03.804 ********* 2025-04-01 19:39:31.547687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547808 | orchestrator | 2025-04-01 19:39:31.547823 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-01 19:39:31.547837 | orchestrator | Tuesday 01 April 2025 19:39:08 +0000 (0:00:02.055) 0:00:05.859 ********* 2025-04-01 19:39:31.547851 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.547958 | orchestrator | 2025-04-01 19:39:31.547972 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-01 19:39:31.547986 | orchestrator | Tuesday 01 April 2025 19:39:13 +0000 (0:00:05.211) 0:00:11.070 ********* 2025-04-01 19:39:31.548000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548105 | orchestrator | 2025-04-01 19:39:31.548120 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-01 19:39:31.548134 | orchestrator | Tuesday 01 April 2025 19:39:19 +0000 (0:00:06.223) 0:00:17.294 ********* 2025-04-01 19:39:31.548148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:31.548234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-01 19:39:34.581334 | orchestrator | 2025-04-01 19:39:34.581489 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-01 19:39:34.581512 | orchestrator | Tuesday 01 April 2025 19:39:22 +0000 (0:00:03.032) 0:00:20.327 ********* 2025-04-01 19:39:34.581527 | orchestrator | 2025-04-01 19:39:34.581542 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-01 19:39:34.581558 | orchestrator | Tuesday 01 April 2025 19:39:22 +0000 (0:00:00.134) 0:00:20.461 ********* 2025-04-01 19:39:34.581572 | orchestrator | 2025-04-01 19:39:34.581587 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-01 19:39:34.581601 | orchestrator | Tuesday 01 April 2025 19:39:23 +0000 (0:00:00.120) 0:00:20.582 ********* 2025-04-01 19:39:34.581615 | orchestrator | 2025-04-01 19:39:34.581629 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-01 19:39:34.581643 | orchestrator | Tuesday 01 April 2025 19:39:23 +0000 (0:00:00.395) 0:00:20.977 ********* 2025-04-01 19:39:34.581657 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:39:34.581672 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:39:34.581686 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:39:34.581701 | orchestrator | 2025-04-01 19:39:34.581715 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-01 19:39:34.581728 | orchestrator | 
Tuesday 01 April 2025 19:39:26 +0000 (0:00:03.496) 0:00:24.474 ********* 2025-04-01 19:39:34.581773 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:39:34.581788 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:39:34.581802 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:39:34.581816 | orchestrator | 2025-04-01 19:39:34.581831 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:39:34.581845 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:34.581861 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:34.581875 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:39:34.581889 | orchestrator | 2025-04-01 19:39:34.581903 | orchestrator | 2025-04-01 19:39:34.581918 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:39:34.581932 | orchestrator | Tuesday 01 April 2025 19:39:30 +0000 (0:00:03.887) 0:00:28.361 ********* 2025-04-01 19:39:34.581946 | orchestrator | =============================================================================== 2025-04-01 19:39:34.581959 | orchestrator | redis : Copying over redis config files --------------------------------- 6.22s 2025-04-01 19:39:34.581973 | orchestrator | redis : Copying over default config.json files -------------------------- 5.21s 2025-04-01 19:39:34.581987 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.89s 2025-04-01 19:39:34.582001 | orchestrator | redis : Restart redis container ----------------------------------------- 3.50s 2025-04-01 19:39:34.582071 | orchestrator | redis : Check redis containers ------------------------------------------ 3.03s 2025-04-01 19:39:34.582089 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.06s 2025-04-01 19:39:34.582103 | orchestrator | redis : include_tasks --------------------------------------------------- 1.25s 2025-04-01 19:39:34.582117 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2025-04-01 19:39:34.582131 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s 2025-04-01 19:39:34.582145 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.65s 2025-04-01 19:39:34.582160 | orchestrator | 2025-04-01 19:39:31 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:34.582174 | orchestrator | 2025-04-01 19:39:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:34.582209 | orchestrator | 2025-04-01 19:39:34 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:34.583102 | orchestrator | 2025-04-01 19:39:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:34.584147 | orchestrator | 2025-04-01 19:39:34 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:34.585252 | orchestrator | 2025-04-01 19:39:34 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:34.586613 | orchestrator | 2025-04-01 19:39:34 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:37.622947 | orchestrator | 2025-04-01 19:39:34 | INFO  | Wait 1 
second(s) until the next check 2025-04-01 19:39:37.623117 | orchestrator | 2025-04-01 19:39:37 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:37.624051 | orchestrator | 2025-04-01 19:39:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:37.624173 | orchestrator | 2025-04-01 19:39:37 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:37.625033 | orchestrator | 2025-04-01 19:39:37 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:37.625786 | orchestrator | 2025-04-01 19:39:37 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:40.664162 | orchestrator | 2025-04-01 19:39:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:40.664429 | orchestrator | 2025-04-01 19:39:40 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:40.664524 | orchestrator | 2025-04-01 19:39:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:40.665496 | orchestrator | 2025-04-01 19:39:40 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:40.666110 | orchestrator | 2025-04-01 19:39:40 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:40.667003 | orchestrator | 2025-04-01 19:39:40 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:43.711247 | orchestrator | 2025-04-01 19:39:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:43.711376 | orchestrator | 2025-04-01 19:39:43 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:43.711986 | orchestrator | 2025-04-01 19:39:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:43.712005 | orchestrator | 2025-04-01 19:39:43 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:43.713455 | orchestrator | 2025-04-01 19:39:43 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:43.714756 | orchestrator | 2025-04-01 19:39:43 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:46.755339 | orchestrator | 2025-04-01 19:39:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:46.755550 | orchestrator | 2025-04-01 19:39:46 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:46.757240 | orchestrator | 2025-04-01 19:39:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:46.758170 | orchestrator | 2025-04-01 19:39:46 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:46.758223 | orchestrator | 2025-04-01 19:39:46 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:46.760326 | orchestrator | 2025-04-01 19:39:46 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:46.760465 | orchestrator | 2025-04-01 19:39:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:49.808328 | orchestrator | 2025-04-01 19:39:49 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:49.808629 | orchestrator | 2025-04-01 19:39:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:49.809525 | orchestrator | 2025-04-01 19:39:49 | INFO  | Task 
8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:49.811972 | orchestrator | 2025-04-01 19:39:49 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:49.813635 | orchestrator | 2025-04-01 19:39:49 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:52.851994 | orchestrator | 2025-04-01 19:39:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:52.852131 | orchestrator | 2025-04-01 19:39:52 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:52.852605 | orchestrator | 2025-04-01 19:39:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:52.853506 | orchestrator | 2025-04-01 19:39:52 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:52.854095 | orchestrator | 2025-04-01 19:39:52 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:52.857988 | orchestrator | 2025-04-01 19:39:52 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:55.911642 | orchestrator | 2025-04-01 19:39:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:55.911774 | orchestrator | 2025-04-01 19:39:55 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:55.913552 | orchestrator | 2025-04-01 19:39:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:55.913590 | orchestrator | 2025-04-01 19:39:55 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:55.913941 | orchestrator | 2025-04-01 19:39:55 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:55.914980 | orchestrator | 2025-04-01 19:39:55 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:39:55.915525 | orchestrator | 2025-04-01 19:39:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:39:58.962081 | orchestrator | 2025-04-01 19:39:58 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:39:58.963080 | orchestrator | 2025-04-01 19:39:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:39:58.964292 | orchestrator | 2025-04-01 19:39:58 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:39:58.966178 | orchestrator | 2025-04-01 19:39:58 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:39:58.967366 | orchestrator | 2025-04-01 19:39:58 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:02.032587 | orchestrator | 2025-04-01 19:39:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:02.032721 | orchestrator | 2025-04-01 19:40:02 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:02.034116 | orchestrator | 2025-04-01 19:40:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:02.039018 | orchestrator | 2025-04-01 19:40:02 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:02.039993 | orchestrator | 2025-04-01 19:40:02 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:02.040055 | orchestrator | 2025-04-01 19:40:02 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:02.040130 | orchestrator | 2025-04-01 
19:40:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:05.095597 | orchestrator | 2025-04-01 19:40:05 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:05.096295 | orchestrator | 2025-04-01 19:40:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:05.097645 | orchestrator | 2025-04-01 19:40:05 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:05.099758 | orchestrator | 2025-04-01 19:40:05 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:05.101791 | orchestrator | 2025-04-01 19:40:05 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:08.148020 | orchestrator | 2025-04-01 19:40:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:08.148147 | orchestrator | 2025-04-01 19:40:08 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:08.149541 | orchestrator | 2025-04-01 19:40:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:08.150263 | orchestrator | 2025-04-01 19:40:08 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:08.151778 | orchestrator | 2025-04-01 19:40:08 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:08.153551 | orchestrator | 2025-04-01 19:40:08 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:11.189289 | orchestrator | 2025-04-01 19:40:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:11.189446 | orchestrator | 2025-04-01 19:40:11 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:11.191871 | orchestrator | 2025-04-01 19:40:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:11.192841 | orchestrator | 2025-04-01 19:40:11 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:11.194729 | orchestrator | 2025-04-01 19:40:11 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:11.196273 | orchestrator | 2025-04-01 19:40:11 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:11.196755 | orchestrator | 2025-04-01 19:40:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:14.248385 | orchestrator | 2025-04-01 19:40:14 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:14.251611 | orchestrator | 2025-04-01 19:40:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:14.255273 | orchestrator | 2025-04-01 19:40:14 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:17.297728 | orchestrator | 2025-04-01 19:40:14 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:17.297852 | orchestrator | 2025-04-01 19:40:14 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:17.297873 | orchestrator | 2025-04-01 19:40:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:17.297908 | orchestrator | 2025-04-01 19:40:17 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:17.302169 | orchestrator | 2025-04-01 19:40:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:17.302577 | orchestrator | 2025-04-01 
19:40:17 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:17.303665 | orchestrator | 2025-04-01 19:40:17 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:17.307055 | orchestrator | 2025-04-01 19:40:17 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:20.349260 | orchestrator | 2025-04-01 19:40:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:20.349464 | orchestrator | 2025-04-01 19:40:20 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:20.349923 | orchestrator | 2025-04-01 19:40:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:20.352095 | orchestrator | 2025-04-01 19:40:20 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:20.356479 | orchestrator | 2025-04-01 19:40:20 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:20.358682 | orchestrator | 2025-04-01 19:40:20 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:23.402303 | orchestrator | 2025-04-01 19:40:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:23.402492 | orchestrator | 2025-04-01 19:40:23 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:23.403676 | orchestrator | 2025-04-01 19:40:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:23.404592 | orchestrator | 2025-04-01 19:40:23 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:23.405544 | orchestrator | 2025-04-01 19:40:23 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:23.407389 | orchestrator | 2025-04-01 19:40:23 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:26.451748 | orchestrator | 2025-04-01 19:40:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:26.451910 | orchestrator | 2025-04-01 19:40:26 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:26.451982 | orchestrator | 2025-04-01 19:40:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:26.452006 | orchestrator | 2025-04-01 19:40:26 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:26.452999 | orchestrator | 2025-04-01 19:40:26 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:26.454288 | orchestrator | 2025-04-01 19:40:26 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state STARTED 2025-04-01 19:40:29.520898 | orchestrator | 2025-04-01 19:40:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:29.521043 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:29.522188 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:29.523043 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:29.524250 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:29.525649 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:29.528035 | 
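The repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from the deploy wrapper re-checking the tasks it has queued once per second until they leave the STARTED state. A minimal shell sketch of that polling pattern, for illustration only (check_task_state is a hypothetical stand-in for however the manager actually queries task state, not a real command; the two task IDs are taken from the log):

  # Poll a set of task IDs until none of them is in state STARTED.
  tasks="d610e2ee-74bf-4a20-8439-9d1765d99617 aa2524f4-a625-4b6b-adac-0dc9967e8e8d"
  while :; do
    busy=0
    for task in $tasks; do
      state=$(check_task_state "$task")        # hypothetical helper
      echo "Task $task is in state $state"
      [ "$state" = "STARTED" ] && busy=1
    done
    [ "$busy" -eq 0 ] && break
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done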
orchestrator | 2025-04-01 19:40:29.529596 | orchestrator | 2025-04-01 19:40:29.529649 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:40:29.529684 | orchestrator | 2025-04-01 19:40:29.529700 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:40:29.529714 | orchestrator | Tuesday 01 April 2025 19:39:03 +0000 (0:00:01.145) 0:00:01.145 ********* 2025-04-01 19:40:29.529729 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:40:29.529750 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:40:29.529764 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:40:29.529778 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:40:29.529792 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:40:29.529806 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:40:29.529820 | orchestrator | 2025-04-01 19:40:29.529834 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:40:29.529848 | orchestrator | Tuesday 01 April 2025 19:39:05 +0000 (0:00:01.675) 0:00:02.820 ********* 2025-04-01 19:40:29.529862 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529877 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529892 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529906 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529943 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529963 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-01 19:40:29.529978 | orchestrator | 2025-04-01 19:40:29.529992 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-01 19:40:29.530006 | orchestrator | 2025-04-01 19:40:29.530071 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-01 19:40:29.530086 | orchestrator | Tuesday 01 April 2025 19:39:07 +0000 (0:00:01.759) 0:00:04.579 ********* 2025-04-01 19:40:29.530102 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:40:29.530117 | orchestrator | 2025-04-01 19:40:29.530132 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-01 19:40:29.530146 | orchestrator | Tuesday 01 April 2025 19:39:10 +0000 (0:00:02.933) 0:00:07.512 ********* 2025-04-01 19:40:29.530162 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-01 19:40:29.530177 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-01 19:40:29.530193 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-01 19:40:29.530209 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-01 19:40:29.530225 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-01 19:40:29.530300 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-01 19:40:29.530369 | orchestrator | 2025-04-01 19:40:29.530386 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-01 
19:40:29.530402 | orchestrator | Tuesday 01 April 2025 19:39:13 +0000 (0:00:03.953) 0:00:11.466 ********* 2025-04-01 19:40:29.530461 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-01 19:40:29.530479 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-01 19:40:29.530495 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-01 19:40:29.530516 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-01 19:40:29.530539 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-01 19:40:29.530562 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-01 19:40:29.530584 | orchestrator | 2025-04-01 19:40:29.530607 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-01 19:40:29.530640 | orchestrator | Tuesday 01 April 2025 19:39:18 +0000 (0:00:04.570) 0:00:16.037 ********* 2025-04-01 19:40:29.530664 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-04-01 19:40:29.530682 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:40:29.530697 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-01 19:40:29.530711 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:40:29.530725 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-01 19:40:29.530739 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:40:29.530753 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-04-01 19:40:29.530767 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:40:29.530781 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-04-01 19:40:29.530795 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:40:29.530809 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-04-01 19:40:29.530823 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:40:29.530837 | orchestrator | 2025-04-01 19:40:29.530851 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-01 19:40:29.530865 | orchestrator | Tuesday 01 April 2025 19:39:21 +0000 (0:00:02.766) 0:00:18.803 ********* 2025-04-01 19:40:29.530879 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:40:29.530893 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:40:29.530907 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:40:29.530934 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:40:29.530948 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:40:29.530962 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:40:29.530976 | orchestrator | 2025-04-01 19:40:29.530990 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-01 19:40:29.531004 | orchestrator | Tuesday 01 April 2025 19:39:22 +0000 (0:00:00.883) 0:00:19.687 ********* 2025-04-01 19:40:29.531036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531264 | orchestrator | 2025-04-01 19:40:29.531279 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-01 19:40:29.531301 | orchestrator | Tuesday 01 April 2025 19:39:24 +0000 (0:00:02.765) 0:00:22.452 ********* 2025-04-01 19:40:29.531324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.531631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 
'timeout': '30'}}}) 2025-04-01 19:40:29.531645 | orchestrator | 2025-04-01 19:40:29.531659 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-04-01 19:40:29.531674 | orchestrator | Tuesday 01 April 2025 19:39:28 +0000 (0:00:03.133) 0:00:25.585 ********* 2025-04-01 19:40:29.531688 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:40:29.531702 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:40:29.531716 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:40:29.531734 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:40:29.531757 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:40:29.531781 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:40:29.531803 | orchestrator | 2025-04-01 19:40:29.531828 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-04-01 19:40:29.531854 | orchestrator | Tuesday 01 April 2025 19:39:31 +0000 (0:00:03.262) 0:00:28.847 ********* 2025-04-01 19:40:29.531877 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:40:29.531895 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:40:29.531909 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:40:29.531923 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:40:29.531937 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:40:29.531951 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:40:29.531965 | orchestrator | 2025-04-01 19:40:29.531979 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-04-01 19:40:29.531993 | orchestrator | Tuesday 01 April 2025 19:39:33 +0000 (0:00:02.299) 0:00:31.146 ********* 2025-04-01 19:40:29.532010 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:40:29.532034 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:40:29.532057 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:40:29.532080 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:40:29.532105 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:40:29.532140 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:40:29.532164 | orchestrator | 2025-04-01 19:40:29.532187 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-04-01 19:40:29.532212 | orchestrator | Tuesday 01 April 2025 19:39:36 +0000 (0:00:02.575) 0:00:33.722 ********* 2025-04-01 19:40:29.532236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532496 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-01 19:40:29.532708 | orchestrator | 2025-04-01 19:40:29.532728 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.532750 | orchestrator | Tuesday 01 April 2025 19:39:39 +0000 (0:00:03.224) 0:00:36.947 ********* 2025-04-01 19:40:29.532771 | orchestrator | 2025-04-01 19:40:29.532786 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.532798 | orchestrator | Tuesday 01 April 2025 19:39:39 +0000 (0:00:00.280) 0:00:37.227 ********* 2025-04-01 19:40:29.532816 | orchestrator | 2025-04-01 19:40:29.532837 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.532857 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.402) 0:00:37.630 ********* 2025-04-01 19:40:29.532878 | orchestrator | 2025-04-01 19:40:29.532898 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.532919 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.125) 0:00:37.756 ********* 2025-04-01 19:40:29.532940 | orchestrator | 2025-04-01 19:40:29.532967 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.532989 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.326) 0:00:38.082 ********* 2025-04-01 19:40:29.533010 | orchestrator | 2025-04-01 19:40:29.533030 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-01 19:40:29.533051 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.141) 0:00:38.223 ********* 2025-04-01 19:40:29.533071 | orchestrator | 2025-04-01 19:40:29.533092 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-04-01 19:40:29.533113 | orchestrator | Tuesday 01 April 2025 19:39:41 +0000 (0:00:00.593) 0:00:38.817 ********* 2025-04-01 19:40:29.533133 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:40:29.533153 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:40:29.533174 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:40:29.533195 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:40:29.533216 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:40:29.533236 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:40:29.533256 | orchestrator | 2025-04-01 19:40:29.533277 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-04-01 19:40:29.533298 | orchestrator | Tuesday 01 April 2025 19:39:50 +0000 (0:00:09.147) 0:00:47.964 ********* 2025-04-01 
19:40:29.533327 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:40:29.533348 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:40:29.533369 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:40:29.533389 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:40:29.533432 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:40:29.533455 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:40:29.533477 | orchestrator | 2025-04-01 19:40:29.533498 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-01 19:40:29.533519 | orchestrator | Tuesday 01 April 2025 19:39:52 +0000 (0:00:01.866) 0:00:49.831 ********* 2025-04-01 19:40:29.533549 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:40:29.533569 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:40:29.533590 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:40:29.533611 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:40:29.533633 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:40:29.533656 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:40:29.533678 | orchestrator | 2025-04-01 19:40:29.533700 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-04-01 19:40:29.533722 | orchestrator | Tuesday 01 April 2025 19:40:02 +0000 (0:00:10.063) 0:00:59.894 ********* 2025-04-01 19:40:29.533743 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-01 19:40:29.533764 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-04-01 19:40:29.533785 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-01 19:40:29.533806 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-01 19:40:29.533826 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-01 19:40:29.533853 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-01 19:40:29.533874 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-01 19:40:29.533890 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-01 19:40:29.533902 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-01 19:40:29.533915 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-01 19:40:29.533927 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-01 19:40:29.533940 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.533952 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-01 19:40:29.533965 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.533977 | orchestrator | ok: 
[testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.533989 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.534002 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.534014 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-01 19:40:29.534073 | orchestrator | 2025-04-01 19:40:29.534087 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-04-01 19:40:29.534099 | orchestrator | Tuesday 01 April 2025 19:40:11 +0000 (0:00:09.005) 0:01:08.900 ********* 2025-04-01 19:40:29.534112 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-04-01 19:40:29.534125 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:40:29.534139 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-04-01 19:40:29.534151 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:40:29.534164 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-04-01 19:40:29.534176 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:40:29.534189 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-04-01 19:40:29.534215 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-04-01 19:40:29.534228 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-04-01 19:40:29.534241 | orchestrator | 2025-04-01 19:40:29.534253 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-04-01 19:40:29.534272 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:02.919) 0:01:11.819 ********* 2025-04-01 19:40:29.534285 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-04-01 19:40:29.534297 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:40:29.534310 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-04-01 19:40:29.534322 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:40:29.534335 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-04-01 19:40:29.534347 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:40:29.534360 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-04-01 19:40:29.534381 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-04-01 19:40:32.587464 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-04-01 19:40:32.587583 | orchestrator | 2025-04-01 19:40:32.587605 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-01 19:40:32.587621 | orchestrator | Tuesday 01 April 2025 19:40:18 +0000 (0:00:04.107) 0:01:15.927 ********* 2025-04-01 19:40:32.587635 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:40:32.587651 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:40:32.587665 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:40:32.587680 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:40:32.587694 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:40:32.587708 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:40:32.587723 | orchestrator | 2025-04-01 19:40:32.587737 | orchestrator | PLAY RECAP 
********************************************************************* 2025-04-01 19:40:32.587752 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:40:32.587768 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:40:32.587782 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:40:32.587796 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:40:32.587810 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:40:32.587844 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:40:32.587858 | orchestrator | 2025-04-01 19:40:32.587872 | orchestrator | 2025-04-01 19:40:32.587886 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:40:32.587900 | orchestrator | Tuesday 01 April 2025 19:40:27 +0000 (0:00:08.815) 0:01:24.742 ********* 2025-04-01 19:40:32.587915 | orchestrator | =============================================================================== 2025-04-01 19:40:32.587928 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.88s 2025-04-01 19:40:32.587943 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.15s 2025-04-01 19:40:32.587959 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.01s 2025-04-01 19:40:32.587974 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.57s 2025-04-01 19:40:32.587989 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.11s 2025-04-01 19:40:32.588030 | orchestrator | module-load : Load modules ---------------------------------------------- 3.95s 2025-04-01 19:40:32.588047 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.26s 2025-04-01 19:40:32.588063 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.22s 2025-04-01 19:40:32.588078 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.13s 2025-04-01 19:40:32.588093 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.93s 2025-04-01 19:40:32.588113 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.92s 2025-04-01 19:40:32.588129 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.77s 2025-04-01 19:40:32.588143 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.77s 2025-04-01 19:40:32.588159 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.58s 2025-04-01 19:40:32.588174 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.30s 2025-04-01 19:40:32.588190 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.87s 2025-04-01 19:40:32.588206 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.87s 2025-04-01 19:40:32.588221 | orchestrator | Group hosts based on enabled services 
----------------------------------- 1.76s 2025-04-01 19:40:32.588237 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.68s 2025-04-01 19:40:32.588252 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s 2025-04-01 19:40:32.588267 | orchestrator | 2025-04-01 19:40:29 | INFO  | Task 000cb1de-f387-4526-81cc-6e4171870fdc is in state SUCCESS 2025-04-01 19:40:32.588283 | orchestrator | 2025-04-01 19:40:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:32.588315 | orchestrator | 2025-04-01 19:40:32 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:32.588564 | orchestrator | 2025-04-01 19:40:32 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:32.589694 | orchestrator | 2025-04-01 19:40:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:32.590880 | orchestrator | 2025-04-01 19:40:32 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:32.591631 | orchestrator | 2025-04-01 19:40:32 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:35.637388 | orchestrator | 2025-04-01 19:40:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:35.637563 | orchestrator | 2025-04-01 19:40:35 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:35.638136 | orchestrator | 2025-04-01 19:40:35 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:35.638918 | orchestrator | 2025-04-01 19:40:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:35.639968 | orchestrator | 2025-04-01 19:40:35 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:35.641039 | orchestrator | 2025-04-01 19:40:35 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:35.641333 | orchestrator | 2025-04-01 19:40:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:38.679180 | orchestrator | 2025-04-01 19:40:38 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:38.680654 | orchestrator | 2025-04-01 19:40:38 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:38.683004 | orchestrator | 2025-04-01 19:40:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:38.684044 | orchestrator | 2025-04-01 19:40:38 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:38.685096 | orchestrator | 2025-04-01 19:40:38 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:38.685208 | orchestrator | 2025-04-01 19:40:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:41.727563 | orchestrator | 2025-04-01 19:40:41 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:41.728199 | orchestrator | 2025-04-01 19:40:41 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:41.730111 | orchestrator | 2025-04-01 19:40:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:41.733202 | orchestrator | 2025-04-01 19:40:41 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:41.734533 | orchestrator | 2025-04-01 19:40:41 | INFO  | Task 
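The openvswitch play that just finished configures each node's Open vSwitch instance: it sets external_ids:system-id and external_ids:hostname, makes sure other_config:hw-offload is absent, and on testbed-node-0/1/2 (the nodes that were not skipped) creates the br-ex bridge with a vxlan0 port. A rough sketch of the equivalent ovs-vsctl calls for one node is shown below; the subprocess wrapper is illustrative only, as the play drives these changes through kolla-ansible modules.

    import subprocess

    node = "testbed-node-0"
    commands = [
        # external_ids written by "Set system-id, hostname and hw-offload"
        ["ovs-vsctl", "set", "Open_vSwitch", ".", f"external_ids:system-id={node}"],
        ["ovs-vsctl", "set", "Open_vSwitch", ".", f"external_ids:hostname={node}"],
        # state 'absent' in the log means the key must not be present
        ["ovs-vsctl", "remove", "Open_vSwitch", ".", "other_config", "hw-offload"],
        # bridge/port setup, only applied on the nodes that were not skipped
        ["ovs-vsctl", "--may-exist", "add-br", "br-ex"],
        ["ovs-vsctl", "--may-exist", "add-port", "br-ex", "vxlan0"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)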
84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:41.734792 | orchestrator | 2025-04-01 19:40:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:44.783977 | orchestrator | 2025-04-01 19:40:44 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:44.784371 | orchestrator | 2025-04-01 19:40:44 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:44.784451 | orchestrator | 2025-04-01 19:40:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:44.787878 | orchestrator | 2025-04-01 19:40:44 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:47.841017 | orchestrator | 2025-04-01 19:40:44 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:47.841133 | orchestrator | 2025-04-01 19:40:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:47.841170 | orchestrator | 2025-04-01 19:40:47 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:47.843078 | orchestrator | 2025-04-01 19:40:47 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:47.846695 | orchestrator | 2025-04-01 19:40:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:47.850591 | orchestrator | 2025-04-01 19:40:47 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:47.852622 | orchestrator | 2025-04-01 19:40:47 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:47.852922 | orchestrator | 2025-04-01 19:40:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:50.907298 | orchestrator | 2025-04-01 19:40:50 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:50.908457 | orchestrator | 2025-04-01 19:40:50 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:50.909738 | orchestrator | 2025-04-01 19:40:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:50.912121 | orchestrator | 2025-04-01 19:40:50 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:50.915047 | orchestrator | 2025-04-01 19:40:50 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:53.961278 | orchestrator | 2025-04-01 19:40:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:53.961414 | orchestrator | 2025-04-01 19:40:53 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:53.962557 | orchestrator | 2025-04-01 19:40:53 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:53.964839 | orchestrator | 2025-04-01 19:40:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:53.968801 | orchestrator | 2025-04-01 19:40:53 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:53.969252 | orchestrator | 2025-04-01 19:40:53 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:40:57.020690 | orchestrator | 2025-04-01 19:40:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:40:57.020831 | orchestrator | 2025-04-01 19:40:57 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:40:57.021892 | orchestrator | 2025-04-01 19:40:57 | INFO  | Task 
cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:40:57.021923 | orchestrator | 2025-04-01 19:40:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:40:57.021938 | orchestrator | 2025-04-01 19:40:57 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:40:57.021959 | orchestrator | 2025-04-01 19:40:57 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:00.074577 | orchestrator | 2025-04-01 19:40:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:00.074716 | orchestrator | 2025-04-01 19:41:00 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:00.075951 | orchestrator | 2025-04-01 19:41:00 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:00.076933 | orchestrator | 2025-04-01 19:41:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:00.078898 | orchestrator | 2025-04-01 19:41:00 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:00.080088 | orchestrator | 2025-04-01 19:41:00 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:00.080316 | orchestrator | 2025-04-01 19:41:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:03.139202 | orchestrator | 2025-04-01 19:41:03 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:03.142217 | orchestrator | 2025-04-01 19:41:03 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:06.196151 | orchestrator | 2025-04-01 19:41:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:06.196269 | orchestrator | 2025-04-01 19:41:03 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:06.196288 | orchestrator | 2025-04-01 19:41:03 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:06.196303 | orchestrator | 2025-04-01 19:41:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:06.196335 | orchestrator | 2025-04-01 19:41:06 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:06.196793 | orchestrator | 2025-04-01 19:41:06 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:06.198118 | orchestrator | 2025-04-01 19:41:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:06.198785 | orchestrator | 2025-04-01 19:41:06 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:06.201681 | orchestrator | 2025-04-01 19:41:06 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:09.253311 | orchestrator | 2025-04-01 19:41:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:09.253519 | orchestrator | 2025-04-01 19:41:09 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:09.254506 | orchestrator | 2025-04-01 19:41:09 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:09.256180 | orchestrator | 2025-04-01 19:41:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:09.258615 | orchestrator | 2025-04-01 19:41:09 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:09.260876 | orchestrator | 2025-04-01 
19:41:09 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:12.313359 | orchestrator | 2025-04-01 19:41:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:12.313556 | orchestrator | 2025-04-01 19:41:12 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:12.314328 | orchestrator | 2025-04-01 19:41:12 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:12.314358 | orchestrator | 2025-04-01 19:41:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:12.314372 | orchestrator | 2025-04-01 19:41:12 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:12.314393 | orchestrator | 2025-04-01 19:41:12 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:15.362383 | orchestrator | 2025-04-01 19:41:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:15.362557 | orchestrator | 2025-04-01 19:41:15 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:15.363336 | orchestrator | 2025-04-01 19:41:15 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:15.366205 | orchestrator | 2025-04-01 19:41:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:15.367800 | orchestrator | 2025-04-01 19:41:15 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:15.369763 | orchestrator | 2025-04-01 19:41:15 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:18.410989 | orchestrator | 2025-04-01 19:41:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:18.411121 | orchestrator | 2025-04-01 19:41:18 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:18.411797 | orchestrator | 2025-04-01 19:41:18 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:18.413289 | orchestrator | 2025-04-01 19:41:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:18.415236 | orchestrator | 2025-04-01 19:41:18 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:18.418166 | orchestrator | 2025-04-01 19:41:18 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:21.464152 | orchestrator | 2025-04-01 19:41:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:21.464278 | orchestrator | 2025-04-01 19:41:21 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:21.465066 | orchestrator | 2025-04-01 19:41:21 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:21.468698 | orchestrator | 2025-04-01 19:41:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:21.471655 | orchestrator | 2025-04-01 19:41:21 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:21.473579 | orchestrator | 2025-04-01 19:41:21 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:24.528556 | orchestrator | 2025-04-01 19:41:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:24.528693 | orchestrator | 2025-04-01 19:41:24 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:24.529396 | orchestrator | 2025-04-01 
19:41:24 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:24.529430 | orchestrator | 2025-04-01 19:41:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:24.529950 | orchestrator | 2025-04-01 19:41:24 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:24.530669 | orchestrator | 2025-04-01 19:41:24 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:27.571162 | orchestrator | 2025-04-01 19:41:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:27.571287 | orchestrator | 2025-04-01 19:41:27 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:27.575200 | orchestrator | 2025-04-01 19:41:27 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:30.617017 | orchestrator | 2025-04-01 19:41:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:30.617134 | orchestrator | 2025-04-01 19:41:27 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:30.617151 | orchestrator | 2025-04-01 19:41:27 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:30.617167 | orchestrator | 2025-04-01 19:41:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:30.617200 | orchestrator | 2025-04-01 19:41:30 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:30.617817 | orchestrator | 2025-04-01 19:41:30 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:30.617852 | orchestrator | 2025-04-01 19:41:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:30.619435 | orchestrator | 2025-04-01 19:41:30 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:30.620308 | orchestrator | 2025-04-01 19:41:30 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:33.658670 | orchestrator | 2025-04-01 19:41:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:33.658812 | orchestrator | 2025-04-01 19:41:33 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:33.660554 | orchestrator | 2025-04-01 19:41:33 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:33.660589 | orchestrator | 2025-04-01 19:41:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:33.661380 | orchestrator | 2025-04-01 19:41:33 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:33.663397 | orchestrator | 2025-04-01 19:41:33 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:33.663558 | orchestrator | 2025-04-01 19:41:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:36.701531 | orchestrator | 2025-04-01 19:41:36 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:36.701809 | orchestrator | 2025-04-01 19:41:36 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:36.702832 | orchestrator | 2025-04-01 19:41:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:36.703592 | orchestrator | 2025-04-01 19:41:36 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:36.704365 | 
orchestrator | 2025-04-01 19:41:36 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:39.755176 | orchestrator | 2025-04-01 19:41:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:39.755333 | orchestrator | 2025-04-01 19:41:39 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:39.756111 | orchestrator | 2025-04-01 19:41:39 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:39.756142 | orchestrator | 2025-04-01 19:41:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:39.758613 | orchestrator | 2025-04-01 19:41:39 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:39.761098 | orchestrator | 2025-04-01 19:41:39 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:42.811194 | orchestrator | 2025-04-01 19:41:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:42.811374 | orchestrator | 2025-04-01 19:41:42 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state STARTED 2025-04-01 19:41:42.813507 | orchestrator | 2025-04-01 19:41:42 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state STARTED 2025-04-01 19:41:42.816836 | orchestrator | 2025-04-01 19:41:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:41:42.817279 | orchestrator | 2025-04-01 19:41:42 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:41:42.818120 | orchestrator | 2025-04-01 19:41:42 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:41:45.868331 | orchestrator | 2025-04-01 19:41:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:41:45.868513 | orchestrator | 2025-04-01 19:41:45.869018 | orchestrator | 2025-04-01 19:41:45 | INFO  | Task d610e2ee-74bf-4a20-8439-9d1765d99617 is in state SUCCESS 2025-04-01 19:41:45.869058 | orchestrator | 2025-04-01 19:41:45.869073 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-04-01 19:41:45.869088 | orchestrator | 2025-04-01 19:41:45.869103 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-01 19:41:45.869117 | orchestrator | Tuesday 01 April 2025 19:39:29 +0000 (0:00:00.517) 0:00:00.517 ********* 2025-04-01 19:41:45.869131 | orchestrator | ok: [localhost] => { 2025-04-01 19:41:45.869149 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-04-01 19:41:45.869163 | orchestrator | } 2025-04-01 19:41:45.869178 | orchestrator | 2025-04-01 19:41:45.869192 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-01 19:41:45.869206 | orchestrator | Tuesday 01 April 2025 19:39:29 +0000 (0:00:00.066) 0:00:00.584 ********* 2025-04-01 19:41:45.869221 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-01 19:41:45.869237 | orchestrator | ...ignoring 2025-04-01 19:41:45.869251 | orchestrator | 2025-04-01 19:41:45.869265 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-01 19:41:45.869280 | orchestrator | Tuesday 01 April 2025 19:39:32 +0000 (0:00:02.660) 0:00:03.245 ********* 2025-04-01 19:41:45.869294 | orchestrator | skipping: [localhost] 2025-04-01 19:41:45.869308 | orchestrator | 2025-04-01 19:41:45.869322 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-01 19:41:45.869366 | orchestrator | Tuesday 01 April 2025 19:39:32 +0000 (0:00:00.082) 0:00:03.327 ********* 2025-04-01 19:41:45.869380 | orchestrator | ok: [localhost] 2025-04-01 19:41:45.869395 | orchestrator | 2025-04-01 19:41:45.869409 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:41:45.869423 | orchestrator | 2025-04-01 19:41:45.869437 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:41:45.869479 | orchestrator | Tuesday 01 April 2025 19:39:32 +0000 (0:00:00.245) 0:00:03.572 ********* 2025-04-01 19:41:45.869494 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:41:45.869508 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:41:45.869523 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:41:45.869537 | orchestrator | 2025-04-01 19:41:45.869551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:41:45.869565 | orchestrator | Tuesday 01 April 2025 19:39:33 +0000 (0:00:00.638) 0:00:04.211 ********* 2025-04-01 19:41:45.869579 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-01 19:41:45.869593 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-04-01 19:41:45.869608 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-01 19:41:45.869622 | orchestrator | 2025-04-01 19:41:45.869639 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-01 19:41:45.869654 | orchestrator | 2025-04-01 19:41:45.869670 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-01 19:41:45.869686 | orchestrator | Tuesday 01 April 2025 19:39:33 +0000 (0:00:00.486) 0:00:04.697 ********* 2025-04-01 19:41:45.869702 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:41:45.869765 | orchestrator | 2025-04-01 19:41:45.869781 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-01 19:41:45.869797 | orchestrator | Tuesday 01 April 2025 19:39:35 +0000 (0:00:01.768) 0:00:06.465 ********* 2025-04-01 19:41:45.869813 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:41:45.869829 | orchestrator | 2025-04-01 19:41:45.869844 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-01 19:41:45.869860 | orchestrator | Tuesday 01 April 2025 19:39:37 +0000 (0:00:01.687) 0:00:08.153 ********* 2025-04-01 19:41:45.869875 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.869892 | orchestrator | 2025-04-01 19:41:45.869908 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
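The "Check RabbitMQ service" task above probes 192.168.16.9:15672 for the string "RabbitMQ Management" and is allowed to fail: on a fresh testbed the endpoint does not answer yet, so the ignored timeout leaves kolla_action_rabbitmq at the regular deploy action instead of switching to upgrade. A small sketch of that gate follows; the real task uses Ansible's wait_for with a search string, so the HTTP probe and the rabbitmq_action name here are assumptions.

    import urllib.request
    from urllib.error import URLError

    def rabbitmq_action(host: str = "192.168.16.9", port: int = 15672) -> str:
        # If the management UI already answers, RabbitMQ is running and the
        # play would pick the upgrade path; otherwise keep the deploy action.
        try:
            with urllib.request.urlopen(f"http://{host}:{port}/", timeout=2) as resp:
                body = resp.read().decode(errors="replace")
        except (URLError, OSError):
            return "deploy"
        return "upgrade" if "RabbitMQ Management" in body else "deploy"

    print(rabbitmq_action())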
************************************* 2025-04-01 19:41:45.869942 | orchestrator | Tuesday 01 April 2025 19:39:38 +0000 (0:00:00.714) 0:00:08.867 ********* 2025-04-01 19:41:45.869959 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.869975 | orchestrator | 2025-04-01 19:41:45.869989 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-01 19:41:45.870004 | orchestrator | Tuesday 01 April 2025 19:39:39 +0000 (0:00:01.428) 0:00:10.295 ********* 2025-04-01 19:41:45.870065 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.870081 | orchestrator | 2025-04-01 19:41:45.870095 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-01 19:41:45.870109 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.694) 0:00:10.989 ********* 2025-04-01 19:41:45.870123 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.870138 | orchestrator | 2025-04-01 19:41:45.870152 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-01 19:41:45.870166 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:00.449) 0:00:11.439 ********* 2025-04-01 19:41:45.870265 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:41:45.870282 | orchestrator | 2025-04-01 19:41:45.870296 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-01 19:41:45.870311 | orchestrator | Tuesday 01 April 2025 19:39:41 +0000 (0:00:01.296) 0:00:12.735 ********* 2025-04-01 19:41:45.870325 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:41:45.870350 | orchestrator | 2025-04-01 19:41:45.870365 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-01 19:41:45.870379 | orchestrator | Tuesday 01 April 2025 19:39:43 +0000 (0:00:01.942) 0:00:14.677 ********* 2025-04-01 19:41:45.870393 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.870408 | orchestrator | 2025-04-01 19:41:45.870422 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-01 19:41:45.870436 | orchestrator | Tuesday 01 April 2025 19:39:44 +0000 (0:00:01.056) 0:00:15.733 ********* 2025-04-01 19:41:45.870472 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.870488 | orchestrator | 2025-04-01 19:41:45.870510 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-01 19:41:45.870526 | orchestrator | Tuesday 01 April 2025 19:39:45 +0000 (0:00:00.788) 0:00:16.522 ********* 2025-04-01 19:41:45.870542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870599 | orchestrator | 2025-04-01 19:41:45.870614 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-01 19:41:45.870628 | orchestrator | Tuesday 01 April 2025 19:39:47 +0000 (0:00:01.394) 0:00:17.917 ********* 2025-04-01 19:41:45.870655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.870702 | orchestrator | 2025-04-01 19:41:45.870717 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-01 19:41:45.870731 | orchestrator | Tuesday 01 April 2025 19:39:49 +0000 (0:00:02.355) 0:00:20.272 ********* 2025-04-01 19:41:45.870745 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-01 19:41:45.870760 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-01 19:41:45.870790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-01 19:41:45.870805 | orchestrator | 2025-04-01 19:41:45.870819 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-01 19:41:45.870841 | orchestrator | Tuesday 01 April 2025 19:39:51 +0000 (0:00:02.321) 0:00:22.594 ********* 
2025-04-01 19:41:45.870855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-01 19:41:45.870870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-01 19:41:45.870888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-01 19:41:45.870903 | orchestrator | 2025-04-01 19:41:45.870917 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-01 19:41:45.870931 | orchestrator | Tuesday 01 April 2025 19:39:56 +0000 (0:00:04.521) 0:00:27.115 ********* 2025-04-01 19:41:45.870945 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-01 19:41:45.870959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-01 19:41:45.870973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-01 19:41:45.870987 | orchestrator | 2025-04-01 19:41:45.871008 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-01 19:41:45.871022 | orchestrator | Tuesday 01 April 2025 19:39:57 +0000 (0:00:01.637) 0:00:28.753 ********* 2025-04-01 19:41:45.871037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-01 19:41:45.871051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-01 19:41:45.871065 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-01 19:41:45.871080 | orchestrator | 2025-04-01 19:41:45.871094 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-01 19:41:45.871108 | orchestrator | Tuesday 01 April 2025 19:40:00 +0000 (0:00:02.842) 0:00:31.596 ********* 2025-04-01 19:41:45.871122 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-01 19:41:45.871136 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-01 19:41:45.871150 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-01 19:41:45.871165 | orchestrator | 2025-04-01 19:41:45.871179 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-01 19:41:45.871198 | orchestrator | Tuesday 01 April 2025 19:40:03 +0000 (0:00:02.267) 0:00:33.863 ********* 2025-04-01 19:41:45.871213 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-01 19:41:45.871227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-01 19:41:45.871241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-01 19:41:45.871255 | orchestrator | 2025-04-01 19:41:45.871269 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-01 19:41:45.871283 | orchestrator | Tuesday 01 April 2025 19:40:05 +0000 (0:00:02.699) 0:00:36.563 ********* 2025-04-01 19:41:45.871297 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.871311 | orchestrator | skipping: [testbed-node-1] 
2025-04-01 19:41:45.871325 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:41:45.871339 | orchestrator | 2025-04-01 19:41:45.871353 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-01 19:41:45.871367 | orchestrator | Tuesday 01 April 2025 19:40:06 +0000 (0:00:00.818) 0:00:37.382 ********* 2025-04-01 19:41:45.871382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.871404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.871428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:41:45.871443 | orchestrator | 2025-04-01 19:41:45.871491 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-01 19:41:45.871506 | orchestrator | Tuesday 01 April 2025 19:40:08 +0000 (0:00:01.842) 0:00:39.224 ********* 2025-04-01 19:41:45.871563 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:41:45.871581 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:41:45.871596 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:41:45.871610 | orchestrator | 2025-04-01 19:41:45.871625 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-01 19:41:45.871639 | orchestrator | Tuesday 01 April 2025 19:40:09 +0000 (0:00:00.971) 0:00:40.196 ********* 2025-04-01 19:41:45.871653 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:41:45.871667 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:41:45.871690 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:41:45.871704 | orchestrator | 2025-04-01 19:41:45.871719 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-01 19:41:45.871733 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:05.107) 0:00:45.304 ********* 2025-04-01 19:41:45.871747 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:41:45.871761 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:41:45.871775 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:41:45.871789 | orchestrator | 2025-04-01 19:41:45.871803 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-01 19:41:45.871817 | orchestrator | 2025-04-01 19:41:45.871831 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-01 19:41:45.871846 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:00.507) 0:00:45.811 ********* 2025-04-01 19:41:45.871860 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:41:45.871874 | orchestrator | 2025-04-01 19:41:45.871888 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-01 19:41:45.871902 | orchestrator | Tuesday 01 April 2025 19:40:15 +0000 (0:00:00.711) 0:00:46.523 ********* 2025-04-01 19:41:45.871916 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:41:45.871930 | orchestrator | 2025-04-01 19:41:45.871944 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-01 19:41:45.871958 | orchestrator | Tuesday 01 April 2025 19:40:16 +0000 (0:00:00.329) 0:00:46.852 ********* 2025-04-01 19:41:45.871972 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:41:45.871986 | orchestrator | 2025-04-01 19:41:45.872000 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-01 19:41:45.872014 | orchestrator | Tuesday 01 April 2025 19:40:22 +0000 (0:00:06.733) 0:00:53.585 ********* 2025-04-01 19:41:45.872028 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:41:45.872042 | orchestrator | 2025-04-01 19:41:45.872056 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-01 19:41:45.872070 | orchestrator | 2025-04-01 19:41:45.872085 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-01 
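At this point testbed-node-0 has completed one full restart cycle: inspect the rabbitmq container, optionally put the node into maintenance mode, restart the container, then wait until the broker answers again; the same play repeats for testbed-node-1 and testbed-node-2 below, one node at a time. A sketch of that rolling pattern is shown here; the ssh/docker commands and the restart_and_wait helper are illustrative, since kolla-ansible performs these steps with its own modules.

    import subprocess
    import time

    def restart_and_wait(host: str, container: str = "rabbitmq", timeout: int = 600) -> None:
        ssh = ["ssh", host]
        subprocess.run(ssh + ["docker", "restart", container], check=True)
        deadline = time.time() + timeout
        while time.time() < deadline:
            probe = subprocess.run(
                ssh + ["docker", "exec", container, "rabbitmqctl", "status"],
                capture_output=True,
            )
            if probe.returncode == 0:
                return  # broker answers again, move on to the next node
            time.sleep(5)
        raise TimeoutError(f"{container} on {host} did not come back in time")

    for node in ["testbed-node-0", "testbed-node-1", "testbed-node-2"]:
        restart_and_wait(node)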
19:41:45.872099 | orchestrator | Tuesday 01 April 2025 19:41:08 +0000 (0:00:45.956) 0:01:39.542 ********* 2025-04-01 19:41:45.872113 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:41:45.872127 | orchestrator | 2025-04-01 19:41:45.872141 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-01 19:41:45.872156 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.583) 0:01:40.126 ********* 2025-04-01 19:41:45.872170 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:41:45.872184 | orchestrator | 2025-04-01 19:41:45.872198 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-01 19:41:45.872212 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.232) 0:01:40.358 ********* 2025-04-01 19:41:45.872226 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:41:45.872240 | orchestrator | 2025-04-01 19:41:45.872254 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-01 19:41:45.872268 | orchestrator | Tuesday 01 April 2025 19:41:11 +0000 (0:00:02.016) 0:01:42.374 ********* 2025-04-01 19:41:45.872282 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:41:45.872296 | orchestrator | 2025-04-01 19:41:45.872311 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-01 19:41:45.872324 | orchestrator | 2025-04-01 19:41:45.872339 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-01 19:41:45.872353 | orchestrator | Tuesday 01 April 2025 19:41:25 +0000 (0:00:14.175) 0:01:56.550 ********* 2025-04-01 19:41:45.872367 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:41:45.872381 | orchestrator | 2025-04-01 19:41:45.872401 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-01 19:41:45.872415 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.523) 0:01:57.073 ********* 2025-04-01 19:41:45.872429 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:41:45.872501 | orchestrator | 2025-04-01 19:41:45.872518 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-01 19:41:45.872541 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.266) 0:01:57.340 ********* 2025-04-01 19:41:45.872556 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:41:45.872570 | orchestrator | 2025-04-01 19:41:45.872584 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-01 19:41:45.872599 | orchestrator | Tuesday 01 April 2025 19:41:28 +0000 (0:00:01.664) 0:01:59.005 ********* 2025-04-01 19:41:45.872613 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:41:45.872633 | orchestrator | 2025-04-01 19:41:45.872646 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-04-01 19:41:45.872658 | orchestrator | 2025-04-01 19:41:45.872671 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-01 19:41:45.872684 | orchestrator | Tuesday 01 April 2025 19:41:39 +0000 (0:00:11.701) 0:02:10.706 ********* 2025-04-01 19:41:45.872697 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:41:45.872710 | orchestrator | 2025-04-01 19:41:45.872722 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-04-01 19:41:45.872735 | orchestrator | Tuesday 01 April 2025 19:41:40 +0000 (0:00:00.778) 0:02:11.485 ********* 2025-04-01 19:41:45.872748 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-01 19:41:45.872760 | orchestrator | enable_outward_rabbitmq_True 2025-04-01 19:41:45.872773 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-01 19:41:45.872786 | orchestrator | outward_rabbitmq_restart 2025-04-01 19:41:45.872798 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:41:45.872811 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:41:45.872824 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:41:45.872836 | orchestrator | 2025-04-01 19:41:45.872849 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-01 19:41:45.872862 | orchestrator | skipping: no hosts matched 2025-04-01 19:41:45.872874 | orchestrator | 2025-04-01 19:41:45.872887 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-01 19:41:45.872899 | orchestrator | skipping: no hosts matched 2025-04-01 19:41:45.872912 | orchestrator | 2025-04-01 19:41:45.872924 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-01 19:41:45.872937 | orchestrator | skipping: no hosts matched 2025-04-01 19:41:45.872949 | orchestrator | 2025-04-01 19:41:45.872962 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:41:45.872975 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-01 19:41:45.872988 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-01 19:41:45.873001 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:41:45.873014 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:41:45.873026 | orchestrator | 2025-04-01 19:41:45.873039 | orchestrator | 2025-04-01 19:41:45.873052 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:41:45.873064 | orchestrator | Tuesday 01 April 2025 19:41:43 +0000 (0:00:03.063) 0:02:14.548 ********* 2025-04-01 19:41:45.873077 | orchestrator | =============================================================================== 2025-04-01 19:41:45.873089 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 71.83s 2025-04-01 19:41:45.873102 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.41s 2025-04-01 19:41:45.873114 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.11s 2025-04-01 19:41:45.873134 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.52s 2025-04-01 19:41:45.873147 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.06s 2025-04-01 19:41:45.873159 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.84s 2025-04-01 19:41:45.873172 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.70s 2025-04-01 19:41:45.873184 | orchestrator | Check RabbitMQ service -------------------------------------------------- 
2.66s
2025-04-01 19:41:45.873197 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.36s
2025-04-01 19:41:45.873209 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.32s
2025-04-01 19:41:45.873222 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.27s
2025-04-01 19:41:45.873234 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.94s
2025-04-01 19:41:45.873247 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.84s
2025-04-01 19:41:45.873264 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.82s
2025-04-01 19:41:45.873277 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.77s
2025-04-01 19:41:45.873289 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.69s
2025-04-01 19:41:45.873302 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.64s
2025-04-01 19:41:45.873314 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.43s
2025-04-01 19:41:45.873327 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.39s
2025-04-01 19:41:45.873339 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s
2025-04-01 19:41:45 - 19:42:56 | orchestrator | INFO  | Tasks cee03166-4fa2-41bb-b051-789750ce0ecc, aa2524f4-a625-4b6b-adac-0dc9967e8e8d, 8d1f2519-d67f-4717-b92c-08759a4078de and 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb were polled every few seconds and each time reported "is in state STARTED", with "Wait 1 second(s) until the next check" between polls (repetitive polling output condensed).
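The rolling RabbitMQ restart recapped above is a point where the broker is worth a quick manual spot check before the OVN plays below continue. A minimal sketch of such a check, assuming the kolla-ansible default container name rabbitmq on a controller node; these commands are illustrative and not taken from this log:

# Cluster membership after the per-node restarts; testbed-node-0/1/2 should all be listed as running
docker exec rabbitmq rabbitmqctl cluster_status
# Basic liveness probe; exits non-zero if the local node is not up
docker exec rabbitmq rabbitmq-diagnostics check_running
# Feature flags touched by the "Enable all stable feature flags" task above
docker exec rabbitmq rabbitmqctl list_feature_flags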
2025-04-01 19:42:59.201388 | orchestrator |
2025-04-01 19:42:59.201409 | orchestrator |
2025-04-01 19:42:59.201425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-01 19:42:59.201440 | orchestrator |
2025-04-01 19:42:59.201455 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-01 19:42:59.201470 | orchestrator | Tuesday 01 April 2025 19:40:32 +0000 (0:00:00.287) 0:00:00.287
********* 2025-04-01 19:42:59.201517 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.201535 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.201550 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.201565 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:42:59.201579 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:42:59.201593 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:42:59.201607 | orchestrator | 2025-04-01 19:42:59.201722 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:42:59.201738 | orchestrator | Tuesday 01 April 2025 19:40:33 +0000 (0:00:00.984) 0:00:01.272 ********* 2025-04-01 19:42:59.201752 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-01 19:42:59.201766 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-01 19:42:59.201780 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-01 19:42:59.201825 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-01 19:42:59.201842 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-01 19:42:59.201858 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-01 19:42:59.201873 | orchestrator | 2025-04-01 19:42:59.201888 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-01 19:42:59.201903 | orchestrator | 2025-04-01 19:42:59.201918 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-01 19:42:59.201934 | orchestrator | Tuesday 01 April 2025 19:40:35 +0000 (0:00:02.318) 0:00:03.591 ********* 2025-04-01 19:42:59.201950 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:42:59.201967 | orchestrator | 2025-04-01 19:42:59.201983 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-01 19:42:59.201999 | orchestrator | Tuesday 01 April 2025 19:40:37 +0000 (0:00:02.023) 0:00:05.614 ********* 2025-04-01 19:42:59.202076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202243 | orchestrator | 2025-04-01 19:42:59.202258 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-01 19:42:59.202282 | orchestrator | Tuesday 01 April 2025 19:40:39 +0000 (0:00:01.555) 0:00:07.170 ********* 2025-04-01 19:42:59.202305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202391 | orchestrator | 2025-04-01 19:42:59.202405 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-01 19:42:59.202419 | orchestrator | Tuesday 01 April 2025 19:40:41 +0000 (0:00:02.344) 0:00:09.514 ********* 2025-04-01 19:42:59.202433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202568 | orchestrator | 2025-04-01 19:42:59.202582 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-01 19:42:59.202596 | orchestrator | Tuesday 01 April 2025 19:40:43 +0000 (0:00:01.693) 0:00:11.207 ********* 2025-04-01 19:42:59.202611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202717 | orchestrator | 2025-04-01 19:42:59.202732 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-01 19:42:59.202746 | orchestrator | Tuesday 01 April 2025 19:40:45 +0000 (0:00:02.048) 0:00:13.255 ********* 2025-04-01 19:42:59.202760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.202845 | orchestrator | 2025-04-01 19:42:59.202859 | orchestrator | TASK [ovn-controller : Create br-int 
bridge on OpenvSwitch] ******************** 2025-04-01 19:42:59.202880 | orchestrator | Tuesday 01 April 2025 19:40:46 +0000 (0:00:01.653) 0:00:14.909 ********* 2025-04-01 19:42:59.202895 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.202910 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.202924 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.202938 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:42:59.202953 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:42:59.202967 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:42:59.202981 | orchestrator | 2025-04-01 19:42:59.202995 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-01 19:42:59.203009 | orchestrator | Tuesday 01 April 2025 19:40:49 +0000 (0:00:03.025) 0:00:17.934 ********* 2025-04-01 19:42:59.203023 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-01 19:42:59.203037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-01 19:42:59.203052 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-01 19:42:59.203071 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-01 19:42:59.203086 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-01 19:42:59.203100 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-01 19:42:59.203114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203142 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203190 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-01 19:42:59.203204 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203249 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203264 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-01 19:42:59.203292 | orchestrator 
| changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203307 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203349 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203364 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-01 19:42:59.203384 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-01 19:42:59.203468 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203509 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203525 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203567 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-01 19:42:59.203581 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-01 19:42:59.203595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-01 19:42:59.203610 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-01 19:42:59.203624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-01 19:42:59.203644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-01 19:42:59.203659 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-01 19:42:59.203673 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-01 19:42:59.203687 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 
'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-01 19:42:59.203702 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-01 19:42:59.203716 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-01 19:42:59.203730 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-01 19:42:59.203744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-01 19:42:59.203758 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-01 19:42:59.203772 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-01 19:42:59.203787 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-01 19:42:59.203801 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-01 19:42:59.203827 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-01 19:42:59.203842 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-01 19:42:59.203856 | orchestrator | 2025-04-01 19:42:59.203870 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.203884 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:19.173) 0:00:37.108 ********* 2025-04-01 19:42:59.203898 | orchestrator | 2025-04-01 19:42:59.203912 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.203926 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.090) 0:00:37.198 ********* 2025-04-01 19:42:59.203940 | orchestrator | 2025-04-01 19:42:59.203954 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.204002 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.247) 0:00:37.446 ********* 2025-04-01 19:42:59.204018 | orchestrator | 2025-04-01 19:42:59.204032 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.204046 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.060) 0:00:37.507 ********* 2025-04-01 19:42:59.204060 | orchestrator | 2025-04-01 19:42:59.204074 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.204088 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.091) 0:00:37.598 ********* 2025-04-01 19:42:59.204102 | orchestrator | 2025-04-01 19:42:59.204116 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-01 19:42:59.204130 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.059) 0:00:37.658 ********* 2025-04-01 19:42:59.204144 | orchestrator | 2025-04-01 19:42:59.204158 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2025-04-01 19:42:59.204172 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.278) 0:00:37.936 ********* 2025-04-01 19:42:59.204186 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.204201 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:42:59.204215 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.204229 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:42:59.204243 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.204257 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:42:59.204271 | orchestrator | 2025-04-01 19:42:59.204285 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-01 19:42:59.204299 | orchestrator | Tuesday 01 April 2025 19:41:12 +0000 (0:00:02.132) 0:00:40.068 ********* 2025-04-01 19:42:59.204313 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.204328 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.204341 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:42:59.204355 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:42:59.204369 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.204383 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:42:59.204397 | orchestrator | 2025-04-01 19:42:59.204411 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-01 19:42:59.204425 | orchestrator | 2025-04-01 19:42:59.204439 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-01 19:42:59.204454 | orchestrator | Tuesday 01 April 2025 19:41:31 +0000 (0:00:19.329) 0:00:59.398 ********* 2025-04-01 19:42:59.204468 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:42:59.204526 | orchestrator | 2025-04-01 19:42:59.204543 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-01 19:42:59.204558 | orchestrator | Tuesday 01 April 2025 19:41:32 +0000 (0:00:00.948) 0:01:00.346 ********* 2025-04-01 19:42:59.204573 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:42:59.204587 | orchestrator | 2025-04-01 19:42:59.204609 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-01 19:42:59.204644 | orchestrator | Tuesday 01 April 2025 19:41:33 +0000 (0:00:01.114) 0:01:01.461 ********* 2025-04-01 19:42:59.204659 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.204673 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.204687 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.204702 | orchestrator | 2025-04-01 19:42:59.204716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-01 19:42:59.204730 | orchestrator | Tuesday 01 April 2025 19:41:34 +0000 (0:00:01.166) 0:01:02.627 ********* 2025-04-01 19:42:59.204744 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.204758 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.204772 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.204786 | orchestrator | 2025-04-01 19:42:59.204800 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-01 19:42:59.204814 | orchestrator | Tuesday 01 April 2025 19:41:35 +0000 (0:00:00.493) 0:01:03.121 ********* 
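The ovn-controller play above writes the chassis settings (ovn-encap-ip, ovn-encap-type, ovn-remote, bridge mappings and CMS options) into the local Open vSwitch database, and the ovn-db lookup tasks here reason about the NB/SB RAFT cluster state. Both can be inspected by hand; a minimal sketch, assuming kolla default container names (ovn_nb_db and ovn_sb_db appear in this log, openvswitch_vswitchd is an assumption) and the usual OVN control-socket paths, which can differ between builds:

# Chassis registration written by the "Configure OVN in OVSDB" task
docker exec openvswitch_vswitchd ovs-vsctl get open_vswitch . external_ids:ovn-remote
docker exec openvswitch_vswitchd ovs-vsctl get open_vswitch . external_ids:ovn-encap-type
# RAFT cluster state of the northbound and southbound databases
docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound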
2025-04-01 19:42:59.204829 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.204843 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.204857 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.204871 | orchestrator | 2025-04-01 19:42:59.204885 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-01 19:42:59.204899 | orchestrator | Tuesday 01 April 2025 19:41:35 +0000 (0:00:00.634) 0:01:03.756 ********* 2025-04-01 19:42:59.204913 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.204927 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.204941 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.204955 | orchestrator | 2025-04-01 19:42:59.204969 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-01 19:42:59.204983 | orchestrator | Tuesday 01 April 2025 19:41:36 +0000 (0:00:00.875) 0:01:04.632 ********* 2025-04-01 19:42:59.204997 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.205011 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.205025 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.205038 | orchestrator | 2025-04-01 19:42:59.205053 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-01 19:42:59.205067 | orchestrator | Tuesday 01 April 2025 19:41:37 +0000 (0:00:00.447) 0:01:05.080 ********* 2025-04-01 19:42:59.205081 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205095 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205109 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205124 | orchestrator | 2025-04-01 19:42:59.205138 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-01 19:42:59.205152 | orchestrator | Tuesday 01 April 2025 19:41:37 +0000 (0:00:00.540) 0:01:05.621 ********* 2025-04-01 19:42:59.205166 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205187 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205201 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205215 | orchestrator | 2025-04-01 19:42:59.205229 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-01 19:42:59.205243 | orchestrator | Tuesday 01 April 2025 19:41:38 +0000 (0:00:00.571) 0:01:06.192 ********* 2025-04-01 19:42:59.205257 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205271 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205285 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205299 | orchestrator | 2025-04-01 19:42:59.205313 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-01 19:42:59.205328 | orchestrator | Tuesday 01 April 2025 19:41:38 +0000 (0:00:00.471) 0:01:06.664 ********* 2025-04-01 19:42:59.205342 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205356 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205370 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205384 | orchestrator | 2025-04-01 19:42:59.205398 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-01 19:42:59.205412 | orchestrator | Tuesday 01 April 2025 19:41:38 +0000 (0:00:00.303) 0:01:06.968 ********* 2025-04-01 19:42:59.205433 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205447 | orchestrator | 
skipping: [testbed-node-1] 2025-04-01 19:42:59.205461 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205475 | orchestrator | 2025-04-01 19:42:59.205505 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-01 19:42:59.205521 | orchestrator | Tuesday 01 April 2025 19:41:39 +0000 (0:00:00.513) 0:01:07.481 ********* 2025-04-01 19:42:59.205535 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205549 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205563 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205577 | orchestrator | 2025-04-01 19:42:59.205591 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-01 19:42:59.205606 | orchestrator | Tuesday 01 April 2025 19:41:40 +0000 (0:00:00.515) 0:01:07.996 ********* 2025-04-01 19:42:59.205620 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205634 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205648 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205662 | orchestrator | 2025-04-01 19:42:59.205676 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-01 19:42:59.205690 | orchestrator | Tuesday 01 April 2025 19:41:40 +0000 (0:00:00.556) 0:01:08.552 ********* 2025-04-01 19:42:59.205704 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205718 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205732 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205746 | orchestrator | 2025-04-01 19:42:59.205799 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-01 19:42:59.205816 | orchestrator | Tuesday 01 April 2025 19:41:41 +0000 (0:00:00.442) 0:01:08.994 ********* 2025-04-01 19:42:59.205830 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205844 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205858 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205872 | orchestrator | 2025-04-01 19:42:59.205887 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-01 19:42:59.205901 | orchestrator | Tuesday 01 April 2025 19:41:41 +0000 (0:00:00.637) 0:01:09.632 ********* 2025-04-01 19:42:59.205915 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.205929 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.205943 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.205957 | orchestrator | 2025-04-01 19:42:59.205978 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-01 19:42:59.205993 | orchestrator | Tuesday 01 April 2025 19:41:42 +0000 (0:00:00.485) 0:01:10.118 ********* 2025-04-01 19:42:59.206052 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206071 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206085 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206100 | orchestrator | 2025-04-01 19:42:59.206114 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-01 19:42:59.206134 | orchestrator | Tuesday 01 April 2025 19:41:42 +0000 (0:00:00.531) 0:01:10.649 ********* 2025-04-01 19:42:59.206159 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206174 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206188 | 
orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206202 | orchestrator | 2025-04-01 19:42:59.206216 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-01 19:42:59.206230 | orchestrator | Tuesday 01 April 2025 19:41:43 +0000 (0:00:00.357) 0:01:11.007 ********* 2025-04-01 19:42:59.206244 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:42:59.206258 | orchestrator | 2025-04-01 19:42:59.206272 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-01 19:42:59.206286 | orchestrator | Tuesday 01 April 2025 19:41:44 +0000 (0:00:01.802) 0:01:12.810 ********* 2025-04-01 19:42:59.206308 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.206322 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.206336 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.206351 | orchestrator | 2025-04-01 19:42:59.206365 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-01 19:42:59.206379 | orchestrator | Tuesday 01 April 2025 19:41:45 +0000 (0:00:01.157) 0:01:13.967 ********* 2025-04-01 19:42:59.206393 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.206407 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.206421 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.206435 | orchestrator | 2025-04-01 19:42:59.206449 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-01 19:42:59.206463 | orchestrator | Tuesday 01 April 2025 19:41:46 +0000 (0:00:00.946) 0:01:14.913 ********* 2025-04-01 19:42:59.206478 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206509 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206524 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206538 | orchestrator | 2025-04-01 19:42:59.206553 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-01 19:42:59.206567 | orchestrator | Tuesday 01 April 2025 19:41:47 +0000 (0:00:00.649) 0:01:15.563 ********* 2025-04-01 19:42:59.206581 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206595 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206609 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206623 | orchestrator | 2025-04-01 19:42:59.206637 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-01 19:42:59.206651 | orchestrator | Tuesday 01 April 2025 19:41:48 +0000 (0:00:00.689) 0:01:16.252 ********* 2025-04-01 19:42:59.206665 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206680 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206694 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206707 | orchestrator | 2025-04-01 19:42:59.206721 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-01 19:42:59.206736 | orchestrator | Tuesday 01 April 2025 19:41:48 +0000 (0:00:00.557) 0:01:16.809 ********* 2025-04-01 19:42:59.206750 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206764 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206778 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206792 | orchestrator | 2025-04-01 19:42:59.206806 | orchestrator | TASK 
[ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-01 19:42:59.206820 | orchestrator | Tuesday 01 April 2025 19:41:49 +0000 (0:00:00.872) 0:01:17.682 ********* 2025-04-01 19:42:59.206834 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206854 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206868 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206882 | orchestrator | 2025-04-01 19:42:59.206896 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-01 19:42:59.206910 | orchestrator | Tuesday 01 April 2025 19:41:50 +0000 (0:00:00.555) 0:01:18.238 ********* 2025-04-01 19:42:59.206924 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.206938 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.206952 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.206966 | orchestrator | 2025-04-01 19:42:59.206981 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-01 19:42:59.206994 | orchestrator | Tuesday 01 April 2025 19:41:50 +0000 (0:00:00.526) 0:01:18.764 ********* 2025-04-01 19:42:59.207009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207165 | orchestrator | 2025-04-01 19:42:59.207179 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-01 19:42:59.207193 | orchestrator | Tuesday 01 April 2025 19:41:52 +0000 (0:00:01.685) 0:01:20.450 ********* 2025-04-01 19:42:59.207207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207356 | orchestrator | 2025-04-01 19:42:59.207370 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-01 19:42:59.207385 | orchestrator | Tuesday 01 April 2025 19:41:58 +0000 (0:00:05.851) 0:01:26.301 ********* 2025-04-01 19:42:59.207399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207437 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.207916 | orchestrator | 2025-04-01 19:42:59.207933 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-01 19:42:59.207949 | orchestrator | Tuesday 01 April 2025 19:42:00 +0000 (0:00:02.514) 0:01:28.816 ********* 2025-04-01 19:42:59.207964 | orchestrator | 2025-04-01 19:42:59.207979 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-04-01 19:42:59.207993 | orchestrator | Tuesday 01 April 2025 19:42:00 +0000 (0:00:00.077) 0:01:28.894 ********* 2025-04-01 19:42:59.208034 | orchestrator | 2025-04-01 19:42:59.208049 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-01 19:42:59.208064 | orchestrator | Tuesday 01 April 2025 19:42:00 +0000 (0:00:00.079) 0:01:28.973 ********* 2025-04-01 19:42:59.208077 | orchestrator | 2025-04-01 19:42:59.208091 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-01 19:42:59.208111 | orchestrator | Tuesday 01 April 2025 19:42:01 +0000 (0:00:00.245) 0:01:29.219 ********* 2025-04-01 19:42:59.208126 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.208143 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.208157 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.208171 | orchestrator | 2025-04-01 19:42:59.208186 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-01 19:42:59.208200 | orchestrator | Tuesday 01 April 2025 19:42:08 +0000 (0:00:07.186) 0:01:36.405 ********* 2025-04-01 19:42:59.208214 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.208228 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.208242 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.208256 | orchestrator | 2025-04-01 19:42:59.208270 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-01 19:42:59.208284 | orchestrator | Tuesday 01 April 2025 19:42:11 +0000 (0:00:03.103) 0:01:39.508 ********* 2025-04-01 19:42:59.208298 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.208312 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.208326 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.208340 | orchestrator | 2025-04-01 19:42:59.208354 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-01 19:42:59.208367 | orchestrator | Tuesday 01 April 2025 19:42:14 +0000 (0:00:03.294) 0:01:42.803 ********* 2025-04-01 19:42:59.208381 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.208396 | orchestrator | 2025-04-01 19:42:59.208410 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-01 19:42:59.208424 | orchestrator | Tuesday 01 April 2025 19:42:14 +0000 (0:00:00.123) 0:01:42.926 ********* 2025-04-01 19:42:59.208438 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.208453 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.208467 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.208513 | orchestrator | 2025-04-01 19:42:59.208557 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-01 19:42:59.208573 | orchestrator | Tuesday 01 April 2025 19:42:16 +0000 (0:00:01.113) 0:01:44.040 ********* 2025-04-01 19:42:59.208588 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.208602 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.208616 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.208630 | orchestrator | 2025-04-01 19:42:59.208645 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-01 19:42:59.208659 | orchestrator | Tuesday 01 April 2025 19:42:16 +0000 (0:00:00.499) 0:01:44.540 
********* 2025-04-01 19:42:59.208673 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.208687 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.208701 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.208715 | orchestrator | 2025-04-01 19:42:59.208730 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-01 19:42:59.208744 | orchestrator | Tuesday 01 April 2025 19:42:17 +0000 (0:00:00.892) 0:01:45.432 ********* 2025-04-01 19:42:59.208758 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.208772 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.208787 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.208801 | orchestrator | 2025-04-01 19:42:59.208815 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-01 19:42:59.208829 | orchestrator | Tuesday 01 April 2025 19:42:18 +0000 (0:00:00.611) 0:01:46.043 ********* 2025-04-01 19:42:59.208843 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.208857 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.208871 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.208904 | orchestrator | 2025-04-01 19:42:59.208918 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-01 19:42:59.208932 | orchestrator | Tuesday 01 April 2025 19:42:19 +0000 (0:00:01.361) 0:01:47.405 ********* 2025-04-01 19:42:59.208947 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.208961 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.208975 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.208989 | orchestrator | 2025-04-01 19:42:59.209003 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-01 19:42:59.209017 | orchestrator | Tuesday 01 April 2025 19:42:20 +0000 (0:00:01.013) 0:01:48.418 ********* 2025-04-01 19:42:59.209031 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.209045 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.209059 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.209073 | orchestrator | 2025-04-01 19:42:59.209087 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-01 19:42:59.209101 | orchestrator | Tuesday 01 April 2025 19:42:21 +0000 (0:00:00.592) 0:01:49.011 ********* 2025-04-01 19:42:59.209118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209134 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209163 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209178 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209239 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209267 | orchestrator | 2025-04-01 19:42:59.209282 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-01 19:42:59.209296 | orchestrator | Tuesday 01 April 2025 19:42:22 +0000 (0:00:01.822) 0:01:50.833 ********* 2025-04-01 19:42:59.209310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-04-01 19:42:59.209325 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209339 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209359 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
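The loop tasks above ("Ensuring config directories exist", "Copying over config.json files for services", and the "Check ovn containers" task that follows) all iterate over the same per-service map that is echoed in each item= entry. The following is a minimal Python sketch of that structure and iteration, reconstructed only from the logged items; the variable name and the print loop are illustrative and are not the kolla-ansible ovn-db role implementation.

# Illustrative reconstruction of the service map seen in the item= entries above.
# Assumption: this mirrors only the data visible in the log, not the role's code.
ovn_db_services = {
    "ovn-northd": {
        "container_name": "ovn_northd",
        "group": "ovn-northd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206",
        "volumes": [
            "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "ovn-nb-db": {
        "container_name": "ovn_nb_db",
        "group": "ovn-nb-db",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206",
        "volumes": [
            "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    # "ovn-sb-db" follows the same pattern with the ovn-sb-db-server image
    # and the ovn_sb_db volume, as shown in the logged items.
}

# Each loop task effectively walks this map for the hosts in each service's
# group: ensure /etc/kolla/<service>/ exists, copy config.json, then check the
# container, which is what produces one ok/changed line per (host, item) above.
for name, svc in ovn_db_services.items():
    if svc["enabled"]:
        print(f"{name}: container={svc['container_name']} image={svc['image']}")

When the container check detects a new or changed container it notifies the restart handlers, which is why the "Restart ovn-nb-db container", "Restart ovn-sb-db container", and "Restart ovn-northd container" handlers run after the "Flush handlers" tasks later in this log.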
2025-04-01 19:42:59.209461 | orchestrator | 2025-04-01 19:42:59.209475 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-01 19:42:59.209509 | orchestrator | Tuesday 01 April 2025 19:42:27 +0000 (0:00:05.112) 0:01:55.946 ********* 2025-04-01 19:42:59.209524 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209539 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209554 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209568 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209668 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 19:42:59.209683 | orchestrator | 2025-04-01 19:42:59.209697 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-01 19:42:59.209712 | orchestrator | Tuesday 01 April 2025 19:42:30 +0000 (0:00:02.754) 0:01:58.700 ********* 2025-04-01 19:42:59.209726 | orchestrator | 2025-04-01 19:42:59.209740 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-01 19:42:59.209754 | orchestrator | Tuesday 01 April 2025 19:42:30 +0000 (0:00:00.245) 0:01:58.946 ********* 2025-04-01 19:42:59.209768 | orchestrator | 2025-04-01 19:42:59.209783 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-01 19:42:59.209797 | orchestrator | Tuesday 01 April 2025 19:42:31 +0000 (0:00:00.074) 0:01:59.020 ********* 2025-04-01 19:42:59.209811 | orchestrator | 2025-04-01 19:42:59.209825 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-01 19:42:59.209840 | orchestrator | Tuesday 01 April 2025 19:42:31 +0000 (0:00:00.064) 0:01:59.084 ********* 2025-04-01 19:42:59.209854 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.209868 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.209882 | orchestrator | 2025-04-01 19:42:59.209896 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-01 19:42:59.209910 | orchestrator | Tuesday 01 April 2025 19:42:38 +0000 (0:00:06.993) 0:02:06.078 ********* 2025-04-01 19:42:59.209925 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.209939 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.209953 | orchestrator | 2025-04-01 19:42:59.209968 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-01 19:42:59.209982 | orchestrator | Tuesday 01 April 2025 19:42:44 +0000 (0:00:06.346) 0:02:12.425 ********* 2025-04-01 19:42:59.209996 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:42:59.210011 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:42:59.210089 | orchestrator | 2025-04-01 19:42:59.210104 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-01 19:42:59.210120 | orchestrator | Tuesday 01 April 2025 19:42:50 +0000 (0:00:06.480) 0:02:18.905 ********* 2025-04-01 19:42:59.210134 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:42:59.210148 | orchestrator | 2025-04-01 19:42:59.210162 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-01 19:42:59.210176 | orchestrator | Tuesday 01 April 2025 19:42:51 +0000 (0:00:00.545) 0:02:19.451 ********* 2025-04-01 19:42:59.210190 | orchestrator | ok: 
[testbed-node-0] 2025-04-01 19:42:59.210204 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.210218 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.210232 | orchestrator | 2025-04-01 19:42:59.210246 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-01 19:42:59.210260 | orchestrator | Tuesday 01 April 2025 19:42:52 +0000 (0:00:00.969) 0:02:20.420 ********* 2025-04-01 19:42:59.210274 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.210288 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.210310 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.210325 | orchestrator | 2025-04-01 19:42:59.210339 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-01 19:42:59.210353 | orchestrator | Tuesday 01 April 2025 19:42:53 +0000 (0:00:00.710) 0:02:21.131 ********* 2025-04-01 19:42:59.210367 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.210381 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.210396 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.210419 | orchestrator | 2025-04-01 19:42:59.210433 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-01 19:42:59.210447 | orchestrator | Tuesday 01 April 2025 19:42:54 +0000 (0:00:01.283) 0:02:22.415 ********* 2025-04-01 19:42:59.210462 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:42:59.210477 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:42:59.210508 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:42:59.210523 | orchestrator | 2025-04-01 19:42:59.210537 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-01 19:42:59.210551 | orchestrator | Tuesday 01 April 2025 19:42:55 +0000 (0:00:00.969) 0:02:23.384 ********* 2025-04-01 19:42:59.210566 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.210579 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.210593 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.210607 | orchestrator | 2025-04-01 19:42:59.210621 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-01 19:42:59.210635 | orchestrator | Tuesday 01 April 2025 19:42:56 +0000 (0:00:00.917) 0:02:24.302 ********* 2025-04-01 19:42:59.210649 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:42:59.210663 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:42:59.210677 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:42:59.210691 | orchestrator | 2025-04-01 19:42:59.210705 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:42:59.210808 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-01 19:42:59.210824 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-01 19:42:59.210848 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-01 19:43:02.260164 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:43:02.260349 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:43:02.260370 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-04-01 19:43:02.260386 | orchestrator | 2025-04-01 19:43:02.260402 | orchestrator | 2025-04-01 19:43:02.260417 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:43:02.260433 | orchestrator | Tuesday 01 April 2025 19:42:57 +0000 (0:00:01.416) 0:02:25.719 ********* 2025-04-01 19:43:02.260447 | orchestrator | =============================================================================== 2025-04-01 19:43:02.260461 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 19.33s 2025-04-01 19:43:02.260475 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.17s 2025-04-01 19:43:02.260520 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.18s 2025-04-01 19:43:02.260535 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.77s 2025-04-01 19:43:02.260550 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.45s 2025-04-01 19:43:02.260564 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.85s 2025-04-01 19:43:02.260614 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.11s 2025-04-01 19:43:02.260630 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.03s 2025-04-01 19:43:02.260644 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.75s 2025-04-01 19:43:02.260660 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.51s 2025-04-01 19:43:02.260676 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.34s 2025-04-01 19:43:02.260691 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.32s 2025-04-01 19:43:02.260707 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.13s 2025-04-01 19:43:02.260723 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.05s 2025-04-01 19:43:02.260738 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.02s 2025-04-01 19:43:02.260754 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.82s 2025-04-01 19:43:02.260770 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.80s 2025-04-01 19:43:02.260785 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.69s 2025-04-01 19:43:02.260800 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s 2025-04-01 19:43:02.260816 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.65s 2025-04-01 19:43:02.260832 | orchestrator | 2025-04-01 19:42:59 | INFO  | Task cee03166-4fa2-41bb-b051-789750ce0ecc is in state SUCCESS 2025-04-01 19:43:02.260848 | orchestrator | 2025-04-01 19:42:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:02.260863 | orchestrator | 2025-04-01 19:42:59 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:02.260879 | orchestrator | 2025-04-01 19:42:59 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:02.260894 | orchestrator | 2025-04-01 19:42:59 | 
INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:02.260930 | orchestrator | 2025-04-01 19:43:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:05.306118 | orchestrator | 2025-04-01 19:43:02 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:05.306266 | orchestrator | 2025-04-01 19:43:02 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:05.306286 | orchestrator | 2025-04-01 19:43:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:05.306320 | orchestrator | 2025-04-01 19:43:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:05.312826 | orchestrator | 2025-04-01 19:43:05 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:05.315039 | orchestrator | 2025-04-01 19:43:05 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:08.374317 | orchestrator | 2025-04-01 19:43:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:08.374455 | orchestrator | 2025-04-01 19:43:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:08.378698 | orchestrator | 2025-04-01 19:43:08 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:08.380219 | orchestrator | 2025-04-01 19:43:08 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:11.427314 | orchestrator | 2025-04-01 19:43:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:11.427443 | orchestrator | 2025-04-01 19:43:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:11.429277 | orchestrator | 2025-04-01 19:43:11 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:11.430928 | orchestrator | 2025-04-01 19:43:11 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:14.496547 | orchestrator | 2025-04-01 19:43:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:14.496678 | orchestrator | 2025-04-01 19:43:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:14.498926 | orchestrator | 2025-04-01 19:43:14 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:14.501178 | orchestrator | 2025-04-01 19:43:14 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:17.558971 | orchestrator | 2025-04-01 19:43:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:17.559114 | orchestrator | 2025-04-01 19:43:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:17.559651 | orchestrator | 2025-04-01 19:43:17 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:17.561049 | orchestrator | 2025-04-01 19:43:17 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:20.618795 | orchestrator | 2025-04-01 19:43:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:20.618911 | orchestrator | 2025-04-01 19:43:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:20.620115 | orchestrator | 2025-04-01 19:43:20 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:20.622557 | orchestrator | 2025-04-01 19:43:20 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb 
is in state STARTED 2025-04-01 19:43:23.674891 | orchestrator | 2025-04-01 19:43:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:23.675029 | orchestrator | 2025-04-01 19:43:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:23.676086 | orchestrator | 2025-04-01 19:43:23 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:23.677876 | orchestrator | 2025-04-01 19:43:23 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:23.678146 | orchestrator | 2025-04-01 19:43:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:26.723946 | orchestrator | 2025-04-01 19:43:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:26.724941 | orchestrator | 2025-04-01 19:43:26 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:26.726412 | orchestrator | 2025-04-01 19:43:26 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:29.780104 | orchestrator | 2025-04-01 19:43:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:29.780237 | orchestrator | 2025-04-01 19:43:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:29.782405 | orchestrator | 2025-04-01 19:43:29 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:29.784696 | orchestrator | 2025-04-01 19:43:29 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:32.833634 | orchestrator | 2025-04-01 19:43:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:32.833773 | orchestrator | 2025-04-01 19:43:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:32.835014 | orchestrator | 2025-04-01 19:43:32 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:32.835804 | orchestrator | 2025-04-01 19:43:32 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:32.836044 | orchestrator | 2025-04-01 19:43:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:35.895913 | orchestrator | 2025-04-01 19:43:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:35.898072 | orchestrator | 2025-04-01 19:43:35 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:35.900093 | orchestrator | 2025-04-01 19:43:35 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:38.969064 | orchestrator | 2025-04-01 19:43:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:38.969214 | orchestrator | 2025-04-01 19:43:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:38.970483 | orchestrator | 2025-04-01 19:43:38 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:38.971264 | orchestrator | 2025-04-01 19:43:38 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:42.030770 | orchestrator | 2025-04-01 19:43:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:42.030906 | orchestrator | 2025-04-01 19:43:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:42.031110 | orchestrator | 2025-04-01 19:43:42 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:42.032328 | 
orchestrator | 2025-04-01 19:43:42 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:45.081122 | orchestrator | 2025-04-01 19:43:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:45.081252 | orchestrator | 2025-04-01 19:43:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:45.083892 | orchestrator | 2025-04-01 19:43:45 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:48.128389 | orchestrator | 2025-04-01 19:43:45 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:48.128542 | orchestrator | 2025-04-01 19:43:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:48.128579 | orchestrator | 2025-04-01 19:43:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:48.133049 | orchestrator | 2025-04-01 19:43:48 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:48.135101 | orchestrator | 2025-04-01 19:43:48 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:51.194807 | orchestrator | 2025-04-01 19:43:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:51.194926 | orchestrator | 2025-04-01 19:43:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:51.196490 | orchestrator | 2025-04-01 19:43:51 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:51.199195 | orchestrator | 2025-04-01 19:43:51 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:54.259142 | orchestrator | 2025-04-01 19:43:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:54.259254 | orchestrator | 2025-04-01 19:43:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:54.262216 | orchestrator | 2025-04-01 19:43:54 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:54.263360 | orchestrator | 2025-04-01 19:43:54 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:57.323211 | orchestrator | 2025-04-01 19:43:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:43:57.323345 | orchestrator | 2025-04-01 19:43:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:43:57.324969 | orchestrator | 2025-04-01 19:43:57 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:43:57.327386 | orchestrator | 2025-04-01 19:43:57 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:43:57.327487 | orchestrator | 2025-04-01 19:43:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:44:00.389899 | orchestrator | 2025-04-01 19:44:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:44:00.391486 | orchestrator | 2025-04-01 19:44:00 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED 2025-04-01 19:44:00.394068 | orchestrator | 2025-04-01 19:44:00 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:44:00.394870 | orchestrator | 2025-04-01 19:44:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:44:03.435158 | orchestrator | 2025-04-01 19:44:03 | INFO  | Task adbceeda-f384-4c11-a1d5-6efd65e55c30 is in state STARTED 2025-04-01 19:44:03.437135 | orchestrator | 2025-04-01 19:44:03 | INFO  | Task 
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:44:03.437764 | orchestrator | 2025-04-01 19:44:03 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state STARTED
2025-04-01 19:44:03.437795 | orchestrator | 2025-04-01 19:44:03 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED
[... task state polling continues every few seconds ("Wait 1 second(s) until the next check"); tasks adbceeda-f384-4c11-a1d5-6efd65e55c30, aa2524f4-a625-4b6b-adac-0dc9967e8e8d, 8d1f2519-d67f-4717-b92c-08759a4078de and 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb remain in state STARTED until 19:44:18 ...]
2025-04-01 19:44:18.760687 | orchestrator | 2025-04-01 19:44:18 | INFO  | Task adbceeda-f384-4c11-a1d5-6efd65e55c30 is in state SUCCESS
[... polling of the remaining three tasks continues every few seconds; all of them stay in state STARTED until 19:47:15 ...]
2025-04-01 19:47:15.917398 | orchestrator | 2025-04-01 19:47:12 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:47:15.917547 | orchestrator | 2025-04-01 19:47:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:47:15.925705 | orchestrator | 2025-04-01 19:47:15 | INFO  | Task 8d1f2519-d67f-4717-b92c-08759a4078de is in state SUCCESS
2025-04-01 19:47:15.928157 | orchestrator |
2025-04-01 19:47:15.928197 | orchestrator | None
2025-04-01 19:47:15.928212 | orchestrator |
2025-04-01 19:47:15.928227 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-01 19:47:15.928242 | orchestrator |
2025-04-01 19:47:15.928291 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-01 19:47:15.928308 | orchestrator | Tuesday 01 April 2025 19:39:04 +0000 (0:00:00.510) 0:00:00.510 *********
2025-04-01 19:47:15.928322 | orchestrator | ok: [testbed-node-0]
2025-04-01 19:47:15.928407 | orchestrator | ok: [testbed-node-1]
2025-04-01 19:47:15.928422 | orchestrator | ok: [testbed-node-2]
2025-04-01 19:47:15.928436 | orchestrator |
2025-04-01 19:47:15.928496 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-01 19:47:15.928514 | orchestrator | Tuesday 01 April 2025 19:39:05 +0000 (0:00:01.025) 0:00:01.535 *********
2025-04-01 19:47:15.928530 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-04-01 19:47:15.928545 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-04-01 19:47:15.928559 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-04-01 19:47:15.928573 | orchestrator |
2025-04-01 19:47:15.928625 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-04-01 19:47:15.928640 | orchestrator |
2025-04-01 19:47:15.928655 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-04-01 19:47:15.928669 | orchestrator | Tuesday 01 April 2025 19:39:06 +0000 (0:00:00.824) 0:00:02.360 *********
2025-04-01 19:47:15.928684 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
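The long run of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" messages above is the OSISM client polling the state of the tasks it started until they finish. A minimal sketch of such a poll loop, assuming Celery-style task IDs and a result backend; the helper names are illustrative and not the actual osism implementation:

    import time

    def wait_for_tasks(app, task_ids, check_interval=1.0):
        """Poll task states until every task has left the PENDING/STARTED states."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                # e.g. PENDING, STARTED, SUCCESS, FAILURE
                state = app.AsyncResult(task_id).state
                print(f"INFO  | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO  | Wait {int(check_interval)} second(s) until the next check")
                time.sleep(check_interval)

Once a task reaches SUCCESS, its captured output (here the kolla-ansible loadbalancer run) is dumped into the log, which is what follows below.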
2025-04-01 19:47:15.928699 | orchestrator |
2025-04-01 19:47:15.928713 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-04-01 19:47:15.928727 | orchestrator | Tuesday 01 April 2025 19:39:08 +0000 (0:00:01.557) 0:00:03.918 *********
2025-04-01 19:47:15.928741 | orchestrator | ok: [testbed-node-0]
2025-04-01 19:47:15.928756 | orchestrator | ok: [testbed-node-1]
2025-04-01 19:47:15.928772 | orchestrator | ok: [testbed-node-2]
2025-04-01 19:47:15.928788 | orchestrator |
2025-04-01 19:47:15.928803 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-04-01 19:47:15.928844 | orchestrator | Tuesday 01 April 2025 19:39:10 +0000 (0:00:02.100) 0:00:06.018 *********
2025-04-01 19:47:15.928860 | orchestrator | included: sysctl for testbed-node-1, testbed-node-0, testbed-node-2
2025-04-01 19:47:15.928876 | orchestrator |
2025-04-01 19:47:15.928891 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-04-01 19:47:15.928906 | orchestrator | Tuesday 01 April 2025 19:39:11 +0000 (0:00:01.561) 0:00:07.580 *********
2025-04-01 19:47:15.928921 | orchestrator | ok: [testbed-node-0]
2025-04-01 19:47:15.928936 | orchestrator | ok: [testbed-node-1]
2025-04-01 19:47:15.928952 | orchestrator | ok: [testbed-node-2]
2025-04-01 19:47:15.929034 | orchestrator |
2025-04-01 19:47:15.929050 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-04-01 19:47:15.929066 | orchestrator | Tuesday 01 April 2025 19:39:13 +0000 (0:00:01.825) 0:00:09.405 *********
2025-04-01 19:47:15.929081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929097 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929140 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929154 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-01 19:47:15.929170 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-01 19:47:15.929184 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-01 19:47:15.929198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-01 19:47:15.929212 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-04-01 19:47:15.929226 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-04-01 19:47:15.929240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-04-01 19:47:15.929254 | orchestrator |
2025-04-01 19:47:15.929268 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-01 19:47:15.929282 | orchestrator | Tuesday 01 April 2025 19:39:18
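The sysctl items applied above (net.ipv6.ip_nonlocal_bind=1, net.ipv4.ip_nonlocal_bind=1, net.unix.max_dgram_qlen=128; net.ipv4.tcp_retries2 carries the value KOLLA_UNSET, is reported ok and left unchanged) prepare the nodes for the keepalived/haproxy setup: ip_nonlocal_bind lets a service bind the virtual IP even while that address is not currently assigned to the local host. A minimal sketch of applying and persisting such values by hand, assuming a hypothetical drop-in file name; this is not the kolla-ansible sysctl role itself:

    import subprocess

    # Values taken from the log above; KOLLA_UNSET entries are skipped on purpose.
    SYSCTL_VALUES = {
        "net.ipv6.ip_nonlocal_bind": 1,
        "net.ipv4.ip_nonlocal_bind": 1,
        "net.unix.max_dgram_qlen": 128,
    }

    def apply_sysctl(values, persist_path="/etc/sysctl.d/99-loadbalancer.conf"):
        # Persist across reboots (the file name here is illustrative) ...
        with open(persist_path, "w") as handle:
            handle.write("".join(f"{key} = {value}\n" for key, value in values.items()))
        # ... and apply immediately on the running system.
        for key, value in values.items():
            subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

The log then continues with the module-load task, which loads and persists the ip_vs kernel module on all three nodes.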
+0000 (0:00:05.030) 0:00:14.436 ********* 2025-04-01 19:47:15.929296 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-01 19:47:15.929319 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-01 19:47:15.929398 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-01 19:47:15.929413 | orchestrator | 2025-04-01 19:47:15.929428 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-01 19:47:15.929442 | orchestrator | Tuesday 01 April 2025 19:39:20 +0000 (0:00:01.775) 0:00:16.211 ********* 2025-04-01 19:47:15.929456 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-01 19:47:15.929471 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-01 19:47:15.929485 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-01 19:47:15.929498 | orchestrator | 2025-04-01 19:47:15.929512 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-01 19:47:15.929526 | orchestrator | Tuesday 01 April 2025 19:39:22 +0000 (0:00:02.518) 0:00:18.730 ********* 2025-04-01 19:47:15.929540 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-04-01 19:47:15.929607 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.929634 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-04-01 19:47:15.929649 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.929664 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-01 19:47:15.929677 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.929702 | orchestrator | 2025-04-01 19:47:15.929716 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-04-01 19:47:15.929730 | orchestrator | Tuesday 01 April 2025 19:39:24 +0000 (0:00:01.111) 0:00:19.842 ********* 2025-04-01 19:47:15.929746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.929856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.929872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', 
'__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.929887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.929902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.929916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.929975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.930000 | orchestrator | 2025-04-01 19:47:15.930061 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-01 19:47:15.930080 | orchestrator | Tuesday 01 April 2025 19:39:26 +0000 (0:00:02.302) 0:00:22.144 ********* 2025-04-01 19:47:15.930094 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.930109 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.930123 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.930138 | orchestrator | 2025-04-01 19:47:15.930188 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-01 19:47:15.930204 | orchestrator | Tuesday 01 April 2025 
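The single-line dictionaries in the output above are the loadbalancer service definitions the role loops over (haproxy, proxysql, keepalived, and the disabled haproxy-ssh). For readability, here is the haproxy entry for testbed-node-0 from the log, reformatted as a Python literal; the other entries follow the same structure:

    haproxy_service = {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.4.24.20241206",
        "privileged": True,
        "volumes": [
            "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "haproxy_socket:/var/lib/kolla/haproxy/",
            "letsencrypt_certificates:/etc/haproxy/certificates",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
            "timeout": "30",
        },
    }

In the haproxy entries shown here, the per-host differences are limited to the healthcheck URL (192.168.16.10, .11, .12); services with enabled set to False, such as haproxy-ssh, show up as "skipping" throughout the run.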
19:39:28 +0000 (0:00:02.158) 0:00:24.302 ********* 2025-04-01 19:47:15.930219 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-01 19:47:15.930233 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-01 19:47:15.930247 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-01 19:47:15.930261 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-01 19:47:15.930275 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-01 19:47:15.930289 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-01 19:47:15.930303 | orchestrator | 2025-04-01 19:47:15.930317 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-01 19:47:15.930332 | orchestrator | Tuesday 01 April 2025 19:39:32 +0000 (0:00:03.565) 0:00:27.868 ********* 2025-04-01 19:47:15.930346 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.930360 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.930374 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.930388 | orchestrator | 2025-04-01 19:47:15.930402 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-01 19:47:15.930416 | orchestrator | Tuesday 01 April 2025 19:39:33 +0000 (0:00:01.770) 0:00:29.639 ********* 2025-04-01 19:47:15.930430 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.930445 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.930459 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.930473 | orchestrator | 2025-04-01 19:47:15.930487 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-01 19:47:15.930547 | orchestrator | Tuesday 01 April 2025 19:39:36 +0000 (0:00:03.097) 0:00:32.737 ********* 2025-04-01 19:47:15.930565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.930602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.930618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.930641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.930665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.930681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.930696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.930712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.930727 | 
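The two check-related tasks in this part of the log ("Removing checks for services which are disabled" and "Copying checks for services which are enabled") iterate over the same service dictionary twice. A minimal sketch of the selection the copy step appears to apply, judging from the changed/skipping pattern in this log; this is not the actual kolla-ansible task logic:

    def checks_to_copy(services):
        """Keep only enabled services that define a healthcheck."""
        return {
            name: service["healthcheck"]
            for name, service in services.items()
            if service.get("enabled") and "healthcheck" in service
        }

Applied to the service definitions in this log, that keeps haproxy and proxysql (reported as changed below) and drops keepalived (no healthcheck) and haproxy-ssh (enabled is False), which are reported as skipping.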
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.930748 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.930763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.930777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.930792 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.930813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.930828 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.930842 | orchestrator | 2025-04-01 19:47:15.930856 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-01 19:47:15.930871 | orchestrator | Tuesday 01 April 2025 19:39:40 +0000 (0:00:03.644) 0:00:36.381 ********* 2025-04-01 19:47:15.930885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.930987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.931002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.931017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.931147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931224 | orchestrator | 2025-04-01 19:47:15.931239 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-01 19:47:15.931253 | orchestrator | Tuesday 01 April 2025 19:39:47 +0000 (0:00:07.232) 0:00:43.614 ********* 2025-04-01 19:47:15.931268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.931379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.931394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.931442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.931471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.931485 | orchestrator | 2025-04-01 19:47:15.931500 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-01 19:47:15.931514 | orchestrator | Tuesday 01 April 2025 19:39:51 +0000 (0:00:03.729) 0:00:47.343 ********* 2025-04-01 19:47:15.931534 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-01 19:47:15.931549 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-01 19:47:15.931563 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-01 19:47:15.931621 | orchestrator | 2025-04-01 19:47:15.931639 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-01 19:47:15.931654 | orchestrator | Tuesday 01 April 2025 19:39:56 +0000 (0:00:05.292) 0:00:52.636 ********* 2025-04-01 19:47:15.931669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-01 19:47:15.931683 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-01 19:47:15.931697 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-01 19:47:15.931711 | orchestrator | 2025-04-01 19:47:15.931725 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-01 19:47:15.931739 | orchestrator | Tuesday 01 April 2025 19:40:00 +0000 (0:00:03.731) 0:00:56.367 
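The "Copying over config.json files for services" task above installs a per-container config.json that the Kolla container entrypoint reads at startup to copy the rendered configuration into place and set ownership and permissions; the following haproxy.cfg and proxysql config tasks then template the actual service configuration into those directories. As a rough illustration of that convention only -- the command string and file paths below are assumptions chosen for the example, not values taken from this run -- such a file could be produced like this:

# Illustrative only: build a Kolla-style config.json for the haproxy container,
# following the usual "command" plus "config_files" convention. The command
# string and paths here are assumptions for the example, not from this job.
import json

haproxy_config = {
    "command": "/usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/haproxy.cfg",
            "dest": "/etc/haproxy/haproxy.cfg",
            "owner": "root",
            "perm": "0600",
        },
    ],
}

with open("config.json", "w") as handle:
    json.dump(haproxy_config, handle, indent=4)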
********* 2025-04-01 19:47:15.931904 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.931921 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.931936 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.931950 | orchestrator | 2025-04-01 19:47:15.931972 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-01 19:47:15.931986 | orchestrator | Tuesday 01 April 2025 19:40:02 +0000 (0:00:01.747) 0:00:58.115 ********* 2025-04-01 19:47:15.932001 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-01 19:47:15.932016 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-01 19:47:15.932031 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-01 19:47:15.932045 | orchestrator | 2025-04-01 19:47:15.932059 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-01 19:47:15.932073 | orchestrator | Tuesday 01 April 2025 19:40:07 +0000 (0:00:04.818) 0:01:02.934 ********* 2025-04-01 19:47:15.932087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-01 19:47:15.932102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-01 19:47:15.932116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-01 19:47:15.932130 | orchestrator | 2025-04-01 19:47:15.932144 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-01 19:47:15.932158 | orchestrator | Tuesday 01 April 2025 19:40:10 +0000 (0:00:02.999) 0:01:05.934 ********* 2025-04-01 19:47:15.932172 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-01 19:47:15.932197 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-01 19:47:15.932212 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-01 19:47:15.932226 | orchestrator | 2025-04-01 19:47:15.932240 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-01 19:47:15.932254 | orchestrator | Tuesday 01 April 2025 19:40:12 +0000 (0:00:02.443) 0:01:08.377 ********* 2025-04-01 19:47:15.932268 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-01 19:47:15.932282 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-01 19:47:15.932296 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-01 19:47:15.932310 | orchestrator | 2025-04-01 19:47:15.932324 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-01 19:47:15.932371 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:02.118) 0:01:10.495 ********* 2025-04-01 19:47:15.932386 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.932401 | orchestrator | 2025-04-01 19:47:15.932415 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-01 19:47:15.932430 | orchestrator | Tuesday 01 April 
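The haproxy.pem and haproxy-internal.pem copies above deliver combined certificate bundles, since HAProxy expects a server certificate and its private key concatenated into a single PEM file. A minimal sketch of producing such a bundle, assuming placeholder input file names rather than the paths used by this job:

# Minimal sketch: concatenate a certificate and its private key into the
# single PEM bundle format HAProxy loads. Input names are placeholders.
from pathlib import Path

def build_haproxy_pem(cert_path: str, key_path: str, bundle_path: str) -> None:
    cert = Path(cert_path).read_text()
    key = Path(key_path).read_text()
    Path(bundle_path).write_text(cert.rstrip("\n") + "\n" + key)

# Example usage (hypothetical file names):
# build_haproxy_pem("testbed.crt", "testbed.key", "haproxy.pem")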
2025 19:40:15 +0000 (0:00:01.039) 0:01:11.535 ********* 2025-04-01 19:47:15.932445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932574 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.932623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.932654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.932756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.932800 | orchestrator | 2025-04-01 19:47:15.932816 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-01 19:47:15.932836 | orchestrator | Tuesday 01 April 2025 19:40:19 +0000 (0:00:03.539) 0:01:15.074 ********* 2025-04-01 19:47:15.932851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.932866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.932880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.932895 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.932909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.932924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.932963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.932979 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.932994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.933008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.933023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.933037 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.933051 | orchestrator | 2025-04-01 19:47:15.933065 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-01 19:47:15.933080 | orchestrator | Tuesday 01 April 2025 19:40:20 +0000 (0:00:01.654) 0:01:16.729 ********* 2025-04-01 19:47:15.933143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.933160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.933193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.933209 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.933224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.933238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.933253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.933267 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.933282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-01 19:47:15.933296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}})  2025-04-01 19:47:15.933318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-01 19:47:15.933332 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.933347 | orchestrator | 2025-04-01 19:47:15.933361 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-04-01 19:47:15.933380 | orchestrator | Tuesday 01 April 2025 19:40:22 +0000 (0:00:01.270) 0:01:18.000 ********* 2025-04-01 19:47:15.933395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-01 19:47:15.933409 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-01 19:47:15.933423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-01 19:47:15.933437 | orchestrator | 2025-04-01 19:47:15.933451 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-04-01 19:47:15.933465 | orchestrator | Tuesday 01 April 2025 19:40:24 +0000 (0:00:02.012) 0:01:20.012 ********* 2025-04-01 19:47:15.933480 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-01 19:47:15.933494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-01 19:47:15.933508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-01 19:47:15.933522 | orchestrator | 2025-04-01 19:47:15.933690 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-04-01 19:47:15.933711 | orchestrator | Tuesday 01 April 2025 19:40:26 +0000 (0:00:02.174) 0:01:22.187 ********* 2025-04-01 19:47:15.933726 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 19:47:15.933740 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 19:47:15.933754 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 19:47:15.933768 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 19:47:15.933782 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.933796 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 19:47:15.933810 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.933824 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 19:47:15.933838 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.933853 | orchestrator | 2025-04-01 19:47:15.933867 | orchestrator | TASK [loadbalancer : 
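The container definitions above carry two kinds of healthcheck: an HTTP probe for haproxy (for example healthcheck_curl http://192.168.16.10:61313) and a listening-port probe for ProxySQL (healthcheck_listen proxysql 6032); the container runtime runs the configured test on the given interval and marks the container unhealthy once the retry budget is exhausted. A rough Python approximation of both probe styles -- an illustrative sketch, not the actual Kolla healthcheck scripts -- looks like this:

# Rough approximation of the two healthcheck styles seen in the log above:
# an HTTP probe and a TCP port probe. Illustrative only, not Kolla's scripts.
import socket
import sys
import urllib.request

def http_probe(url: str, timeout: float = 30.0) -> bool:
    # Succeeds if the endpoint answers with a 2xx/3xx status within the timeout.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except OSError:
        return False

def tcp_probe(host: str, port: int, timeout: float = 30.0) -> bool:
    # Succeeds if something accepts a TCP connection on the given port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    ok = http_probe("http://192.168.16.10:61313") and tcp_probe("127.0.0.1", 6032)
    sys.exit(0 if ok else 1)  # a non-zero exit marks the container unhealthy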
Check loadbalancer containers] **************************** 2025-04-01 19:47:15.933881 | orchestrator | Tuesday 01 April 2025 19:40:28 +0000 (0:00:02.187) 0:01:24.374 ********* 2025-04-01 19:47:15.933906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.933931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.933945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-01 19:47:15.933969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.933984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.934004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-01 19:47:15.934050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.934075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.934090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.934113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.934128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-01 19:47:15.934143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8', '__omit_place_holder__057393d30c57d3e9e8cad94bb9f0def1c5a8bbd8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-01 19:47:15.934158 | orchestrator | 2025-04-01 19:47:15.934172 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-04-01 19:47:15.934186 | orchestrator | Tuesday 01 April 2025 19:40:32 +0000 (0:00:04.060) 0:01:28.435 ********* 2025-04-01 19:47:15.934201 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.934215 | orchestrator | 2025-04-01 19:47:15.934229 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-04-01 19:47:15.934244 | orchestrator | Tuesday 01 April 2025 19:40:34 +0000 (0:00:01.539) 0:01:29.975 ********* 2025-04-01 19:47:15.934258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-01 19:47:15.934280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}}) 2025-04-01 19:47:15.934353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.934405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-01 19:47:15.934422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.934528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.934543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934667 | orchestrator | 2025-04-01 19:47:15.934682 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-04-01 19:47:15.934703 | orchestrator | Tuesday 01 April 2025 19:40:40 +0000 (0:00:06.053) 0:01:36.029 ********* 2025-04-01 19:47:15.934718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-01 19:47:15.934732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.934757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934794 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.934809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-01 19:47:15.934824 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.934846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.934875 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.934933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-01 19:47:15.934973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.935059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.935122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.935148 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.935163 | orchestrator | 2025-04-01 19:47:15.935177 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-04-01 19:47:15.935191 | orchestrator | Tuesday 01 April 2025 19:40:41 +0000 (0:00:01.085) 0:01:37.114 ********* 2025-04-01 19:47:15.935206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935251 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.935265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935279 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.935300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-01 19:47:15.935328 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.935342 | orchestrator | 2025-04-01 19:47:15.935356 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-01 19:47:15.935371 | orchestrator | Tuesday 01 April 2025 19:40:43 +0000 (0:00:01.904) 0:01:39.019 ********* 2025-04-01 19:47:15.935385 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.935399 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.935413 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.935427 | 
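The loop items in the aodh tasks above are kolla-ansible service definitions: each carries a 'haproxy' sub-dict describing one internal frontend and one external frontend (here both on port 8042, with the external one bound to api.testbed.osism.xyz). A minimal, hypothetical Python sketch of how such an entry maps to frontends follows; the helper name and return shape are assumptions for illustration only, not kolla-ansible code.

    # Hypothetical helper: list the HAProxy frontends implied by one of the
    # service dicts shown in the loop items above.
    def haproxy_frontends(service):
        frontends = []
        for name, entry in service.get("haproxy", {}).items():
            if entry.get("enabled") not in ("yes", True):
                continue
            frontends.append({
                "name": name,
                "external": entry.get("external", False),
                "listen_port": entry.get("listen_port", entry.get("port")),
                "fqdn": entry.get("external_fqdn"),  # only set on *_external entries
            })
        return frontends

    # Values copied from the aodh-api item logged above.
    aodh_api = {
        "haproxy": {
            "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                         "port": "8042", "listen_port": "8042"},
            "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "port": "8042", "listen_port": "8042"},
        }
    }
    print(haproxy_frontends(aodh_api))
    # -> one internal and one external frontend, both listening on 8042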
orchestrator | 2025-04-01 19:47:15.935441 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-01 19:47:15.935455 | orchestrator | Tuesday 01 April 2025 19:40:44 +0000 (0:00:01.445) 0:01:40.465 ********* 2025-04-01 19:47:15.935469 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.935483 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.935497 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.935511 | orchestrator | 2025-04-01 19:47:15.935525 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-01 19:47:15.935539 | orchestrator | Tuesday 01 April 2025 19:40:47 +0000 (0:00:02.370) 0:01:42.835 ********* 2025-04-01 19:47:15.935553 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.935770 | orchestrator | 2025-04-01 19:47:15.935791 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-04-01 19:47:15.935805 | orchestrator | Tuesday 01 April 2025 19:40:47 +0000 (0:00:00.873) 0:01:43.709 ********* 2025-04-01 19:47:15.935845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.935877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.935893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2025-04-01 19:47:15.935908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.935923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.935947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.935979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.935994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936023 | orchestrator | 2025-04-01 19:47:15.936037 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-01 19:47:15.936051 | orchestrator | Tuesday 01 April 2025 19:40:54 +0000 (0:00:06.887) 0:01:50.596 ********* 2025-04-01 19:47:15.936065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.936087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936123 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.936147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.936162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936187 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.936206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.936234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.936260 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.936273 | orchestrator | 2025-04-01 19:47:15.936286 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-01 19:47:15.936298 | orchestrator | Tuesday 01 April 2025 19:40:55 +0000 (0:00:01.064) 0:01:51.661 ********* 2025-04-01 19:47:15.936311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936337 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.936350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936381 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.936394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-01 19:47:15.936425 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.936437 | orchestrator | 2025-04-01 19:47:15.936450 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-01 19:47:15.936462 | orchestrator | Tuesday 01 April 2025 19:40:56 +0000 (0:00:01.095) 0:01:52.756 ********* 2025-04-01 19:47:15.936474 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.936487 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.936499 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.936511 | orchestrator | 2025-04-01 19:47:15.936524 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-01 19:47:15.936536 | orchestrator | Tuesday 01 April 2025 19:40:58 +0000 (0:00:01.534) 0:01:54.291 ********* 2025-04-01 19:47:15.936549 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.936634 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.936649 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.936661 | orchestrator | 2025-04-01 19:47:15.936674 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-01 19:47:15.936686 | orchestrator | Tuesday 01 April 2025 19:41:00 +0000 (0:00:02.115) 0:01:56.406 ********* 2025-04-01 19:47:15.936698 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.936711 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.936723 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.936735 | orchestrator | 2025-04-01 19:47:15.936754 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-01 19:47:15.936767 | orchestrator | Tuesday 01 April 2025 19:41:00 +0000 (0:00:00.330) 0:01:56.736 ********* 2025-04-01 19:47:15.936779 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.936792 | orchestrator | 2025-04-01 19:47:15.936804 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-01 19:47:15.936817 | orchestrator | Tuesday 01 April 2025 19:41:01 +0000 (0:00:01.014) 0:01:57.750 ********* 2025-04-01 19:47:15.936830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-01 19:47:15.936843 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-01 19:47:15.936867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-01 19:47:15.936888 | orchestrator | 2025-04-01 19:47:15.936900 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-01 19:47:15.936913 | orchestrator | Tuesday 01 April 2025 19:41:05 +0000 (0:00:03.182) 0:02:00.933 ********* 2025-04-01 19:47:15.936926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-01 19:47:15.936939 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.936957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-01 19:47:15.936971 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.936984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-01 19:47:15.936996 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.937009 | orchestrator | 2025-04-01 19:47:15.937021 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-01 19:47:15.937034 | orchestrator | Tuesday 01 April 2025 19:41:07 +0000 (0:00:02.042) 0:02:02.976 ********* 2025-04-01 19:47:15.937047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937080 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.937092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937118 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.937131 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-01 19:47:15.937167 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.937180 | orchestrator | 2025-04-01 19:47:15.937192 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-01 19:47:15.937205 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:02.750) 0:02:05.727 ********* 2025-04-01 19:47:15.937217 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.937286 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.937300 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.937313 | orchestrator | 2025-04-01 19:47:15.937325 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-01 19:47:15.937338 | orchestrator | Tuesday 01 April 2025 19:41:11 +0000 (0:00:01.093) 0:02:06.821 ********* 2025-04-01 19:47:15.937394 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.937409 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.937421 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.937434 | orchestrator | 2025-04-01 19:47:15.937446 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-01 19:47:15.937458 | orchestrator | Tuesday 01 April 2025 19:41:12 +0000 (0:00:01.683) 0:02:08.504 ********* 2025-04-01 19:47:15.937471 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.937483 | orchestrator | 2025-04-01 19:47:15.937496 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-01 19:47:15.937509 | orchestrator | Tuesday 01 April 2025 19:41:13 +0000 (0:00:01.035) 0:02:09.539 ********* 2025-04-01 19:47:15.937529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
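Unlike the API services, the ceph-rgw entries above do not point HAProxy at the control nodes; they carry a 'custom_member_list' so the radosgw backends are the storage nodes testbed-node-3..5 on port 8081, while HAProxy itself listens on 6780. The sketch below only rebuilds member lines in that same shape; the node names, IPs and check parameters are taken from the log, the helper itself is hypothetical.

    # Illustrative only: produce backend member lines matching the
    # 'custom_member_list' entries logged for ceph-rgw.
    def rgw_member_list(nodes, port=8081, inter=2000, rise=2, fall=5):
        return [
            f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
            for name, ip in nodes
        ]

    members = rgw_member_list([
        ("testbed-node-3", "192.168.16.13"),
        ("testbed-node-4", "192.168.16.14"),
        ("testbed-node-5", "192.168.16.15"),
    ])
    # Each element matches a logged line, e.g.
    # 'server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5'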
2025-04-01 19:47:15.937553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.937619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.937791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-04-01 19:47:15.937844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.937891 | orchestrator | 2025-04-01 19:47:15.937905 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-01 19:47:15.937922 | orchestrator | Tuesday 01 April 2025 19:41:20 +0000 (0:00:06.605) 0:02:16.145 ********* 2025-04-01 19:47:15.937936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.937949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 
19:47:15.937978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938160 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.938173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.938187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938309 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.938323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.938335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.938384 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.938396 | orchestrator | 2025-04-01 19:47:15.938409 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-01 19:47:15.938421 | orchestrator | Tuesday 01 April 2025 19:41:21 +0000 (0:00:01.133) 0:02:17.278 ********* 2025-04-01 19:47:15.938434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938476 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.938489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938515 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.938528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-01 19:47:15.938553 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.938565 | orchestrator | 2025-04-01 19:47:15.938596 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-01 19:47:15.938609 | orchestrator | Tuesday 01 April 2025 19:41:22 +0000 (0:00:01.239) 0:02:18.518 ********* 2025-04-01 19:47:15.938622 | 
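The container definitions in these tasks declare two kinds of Docker healthchecks: 'healthcheck_curl <url>' probes an API on its bind address (e.g. cinder-api on 8776), and 'healthcheck_port <service> <port>' checks the service against a port such as RabbitMQ's 5672. The real scripts ship inside the kolla images; the Python below is only a rough, assumed stand-in to illustrate the two probe styles, not the actual implementation.

    import socket
    import urllib.error
    import urllib.request

    def healthcheck_curl(url, timeout=30):
        # HTTP probe against the service's bind address, e.g. http://192.168.16.10:8776
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError as err:
            return err.code < 500  # the API answered, even if with a client error
        except OSError:
            return False

    def healthcheck_port(host, port, timeout=30):
        # Crude TCP reachability check standing in for the per-process port check
        # (e.g. whether cinder-scheduler can reach RabbitMQ on 5672).
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False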
orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.938635 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.938647 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.938659 | orchestrator | 2025-04-01 19:47:15.938672 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-01 19:47:15.938754 | orchestrator | Tuesday 01 April 2025 19:41:24 +0000 (0:00:01.457) 0:02:19.976 ********* 2025-04-01 19:47:15.938768 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.938781 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.938793 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.938806 | orchestrator | 2025-04-01 19:47:15.938837 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-01 19:47:15.938852 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:02.179) 0:02:22.155 ********* 2025-04-01 19:47:15.938903 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.938916 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.938929 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.938941 | orchestrator | 2025-04-01 19:47:15.938954 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-04-01 19:47:15.938966 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.312) 0:02:22.467 ********* 2025-04-01 19:47:15.938979 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.938991 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.939009 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.939022 | orchestrator | 2025-04-01 19:47:15.939034 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-01 19:47:15.939047 | orchestrator | Tuesday 01 April 2025 19:41:27 +0000 (0:00:00.514) 0:02:22.981 ********* 2025-04-01 19:47:15.939059 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.939072 | orchestrator | 2025-04-01 19:47:15.939084 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-01 19:47:15.939097 | orchestrator | Tuesday 01 April 2025 19:41:28 +0000 (0:00:01.223) 0:02:24.205 ********* 2025-04-01 19:47:15.939110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:47:15.939137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.939152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:47:15.939175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.939240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:47:15.939311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.939430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939698 | orchestrator | 2025-04-01 19:47:15.939718 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-01 19:47:15.939732 | orchestrator | Tuesday 01 April 2025 19:41:34 +0000 (0:00:05.702) 0:02:29.907 ********* 2025-04-01 19:47:15.939745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:47:15.939768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.939782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:47:15.939803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.939816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 
19:47:15.939930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.939984 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.939997 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.940010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:47:15.940029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:47:15.940042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.940055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.940075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.940088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.940109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.940122 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.940135 | orchestrator | 2025-04-01 19:47:15.940148 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-01 19:47:15.940164 | orchestrator | Tuesday 01 
April 2025 19:41:35 +0000 (0:00:01.597) 0:02:31.504 *********
2025-04-01 19:47:15.940174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940196 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:47:15.940206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940227 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:47:15.940237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-04-01 19:47:15.940257 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:47:15.940268 | orchestrator |
2025-04-01 19:47:15.940278 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-04-01 19:47:15.940288 | orchestrator | Tuesday 01 April 2025 19:41:37 +0000 (0:00:01.752) 0:02:33.257 *********
2025-04-01 19:47:15.940298 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:47:15.940309 | orchestrator | changed: [testbed-node-1]
2025-04-01 19:47:15.940319 | orchestrator | changed: [testbed-node-2]
2025-04-01 19:47:15.940329 | orchestrator |
2025-04-01 19:47:15.940339 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-04-01 19:47:15.940349 | orchestrator | Tuesday 01 April 2025 19:41:38 +0000 (0:00:01.453) 0:02:34.711 *********
2025-04-01 19:47:15.940360 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:47:15.940370 | orchestrator | changed: [testbed-node-1]
2025-04-01 19:47:15.940380 | orchestrator | changed: [testbed-node-2]
2025-04-01 19:47:15.940390 | orchestrator |
2025-04-01 19:47:15.940400 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-04-01 19:47:15.940410 | orchestrator | Tuesday 01 April 2025 19:41:41 +0000 (0:00:02.326) 0:02:37.038 *********
2025-04-01 19:47:15.940420 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:47:15.940431 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:47:15.940441 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:47:15.940451 | orchestrator |
2025-04-01 19:47:15.940461 | orchestrator | TASK [include_role : glance] ***************************************************
2025-04-01 19:47:15.940476 | orchestrator | Tuesday 01 April 2025 19:41:41 +0000 (0:00:00.578) 0:02:37.617 *********
2025-04-01 19:47:15.940486 | orchestrator | included: glance for testbed-node-0,
testbed-node-1, testbed-node-2 2025-04-01 19:47:15.940497 | orchestrator | 2025-04-01 19:47:15.940507 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-01 19:47:15.940517 | orchestrator | Tuesday 01 April 2025 19:41:43 +0000 (0:00:01.339) 0:02:38.956 ********* 2025-04-01 19:47:15.940528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 19:47:15.940552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 19:47:15.940615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 19:47:15.940658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940669 | orchestrator | 2025-04-01 19:47:15.940680 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-01 19:47:15.940690 | orchestrator | Tuesday 01 April 2025 19:41:50 +0000 (0:00:07.810) 0:02:46.767 ********* 2025-04-01 19:47:15.940707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 19:47:15.940730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940741 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.940765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 19:47:15.940782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940799 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.940810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 19:47:15.940827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.940851 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.940862 | orchestrator | 2025-04-01 19:47:15.940873 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-01 19:47:15.940887 | orchestrator | Tuesday 01 April 2025 19:41:56 +0000 (0:00:05.701) 0:02:52.468 ********* 2025-04-01 19:47:15.940898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940920 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.940931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940963 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.940974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-01 19:47:15.940995 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.941005 | orchestrator | 2025-04-01 19:47:15.941015 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-01 19:47:15.941026 | orchestrator | Tuesday 01 April 2025 19:42:01 +0000 (0:00:05.138) 0:02:57.607 ********* 2025-04-01 19:47:15.941036 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.941046 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.941056 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.941066 | orchestrator | 2025-04-01 19:47:15.941076 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-01 19:47:15.941086 | orchestrator | Tuesday 01 April 2025 19:42:03 +0000 (0:00:01.408) 0:02:59.016 ********* 2025-04-01 19:47:15.941097 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.941107 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.941117 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.941127 | orchestrator | 2025-04-01 19:47:15.941137 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-01 19:47:15.941148 | orchestrator | Tuesday 01 April 2025 19:42:05 +0000 (0:00:02.002) 0:03:01.018 ********* 2025-04-01 19:47:15.941158 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.941168 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.941178 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.941188 | orchestrator | 2025-04-01 19:47:15.941198 | orchestrator | TASK 
[include_role : grafana] ************************************************** 2025-04-01 19:47:15.941209 | orchestrator | Tuesday 01 April 2025 19:42:05 +0000 (0:00:00.533) 0:03:01.552 ********* 2025-04-01 19:47:15.941219 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.941229 | orchestrator | 2025-04-01 19:47:15.941239 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-01 19:47:15.941249 | orchestrator | Tuesday 01 April 2025 19:42:07 +0000 (0:00:01.234) 0:03:02.787 ********* 2025-04-01 19:47:15.941261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 19:47:15.941273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 19:47:15.941293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 19:47:15.941304 | orchestrator | 2025-04-01 19:47:15.941315 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-04-01 19:47:15.941326 | orchestrator | Tuesday 01 April 2025 19:42:11 +0000 (0:00:04.350) 0:03:07.138 ********* 2025-04-01 19:47:15.941337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 19:47:15.941348 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.941358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 19:47:15.941369 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.941386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 19:47:15.941397 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.941407 | orchestrator | 2025-04-01 19:47:15.941417 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-01 19:47:15.941433 | orchestrator | Tuesday 01 April 2025 19:42:11 +0000 (0:00:00.410) 0:03:07.548 ********* 2025-04-01 19:47:15.941443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941472 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.941482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941503 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.941513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-01 19:47:15.941547 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.941557 | orchestrator | 2025-04-01 19:47:15.941567 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-01 19:47:15.941591 | orchestrator | Tuesday 01 April 2025 19:42:12 +0000 (0:00:01.122) 0:03:08.670 ********* 2025-04-01 19:47:15.941603 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.941613 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.941623 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.941633 | orchestrator | 2025-04-01 19:47:15.941644 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-01 19:47:15.941654 | orchestrator | Tuesday 01 April 2025 19:42:14 +0000 (0:00:01.466) 0:03:10.137 ********* 2025-04-01 19:47:15.941664 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.941674 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.941685 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.941695 | orchestrator | 2025-04-01 19:47:15.941705 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-04-01 19:47:15.941716 | orchestrator | Tuesday 01 April 2025 19:42:16 +0000 (0:00:02.342) 0:03:12.480 ********* 2025-04-01 19:47:15.941726 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.941736 | orchestrator | 2025-04-01 19:47:15.941746 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-04-01 19:47:15.941756 | orchestrator | Tuesday 01 April 2025 19:42:17 +0000 (0:00:01.253) 0:03:13.734 ********* 2025-04-01 19:47:15.941767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 
'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.941861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.941889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.941900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.941910 | orchestrator | 2025-04-01 19:47:15.941924 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-04-01 19:47:15.941935 | orchestrator | Tuesday 01 April 2025 19:42:27 +0000 (0:00:09.069) 0:03:22.803 ********* 2025-04-01 19:47:15.941946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.941964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': 
{'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.941980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.941990 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.942001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.942040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.942054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.942134 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.942147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.942164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.942175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.942186 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.942196 | orchestrator | 2025-04-01 19:47:15.942207 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-04-01 19:47:15.942217 | orchestrator | Tuesday 01 April 2025 19:42:28 +0000 (0:00:01.133) 0:03:23.936 ********* 2025-04-01 19:47:15.942227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  
2025-04-01 19:47:15.942239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942283 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.942293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942344 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.942355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-01 19:47:15.942396 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.942406 | orchestrator | 2025-04-01 19:47:15.942416 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-04-01 19:47:15.942427 | orchestrator | Tuesday 01 April 2025 19:42:29 +0000 (0:00:01.508) 0:03:25.445 ********* 2025-04-01 19:47:15.942437 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.942447 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.942461 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.942472 | orchestrator | 
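The haproxy-config items dumped in the tasks above all share one shape: each service key (heat_api, heat_api_external, grafana_server, keystone_internal, ...) maps to a dict with mode, port, listen_port, external, tls_backend, plus optional custom_member_list and frontend_/backend_*_extra options, and the role renders one frontend/backend pair per entry across the three controller nodes. The sketch below is a loose, illustrative approximation of that rendering step, assuming a plain internal-VIP listen stanza; it is not the actual kolla-ansible haproxy-config template, and the placeholder <internal_vip> and the helper name render_listen are invented for the example.

```python
# Illustrative sketch only -- NOT the kolla-ansible haproxy-config template.
# It renders one service entry of the shape logged above (mode, port,
# listen_port, custom_member_list, *_extra) into an HAProxy-style stanza,
# using the backend hosts that appear in this deployment's task output.

def render_listen(name, svc, members, vip="<internal_vip>"):
    """Turn one haproxy-config style dict into a 'listen' block (string)."""
    lines = [f"listen {name}",
             f"    mode {svc.get('mode', 'http')}",
             f"    bind {vip}:{svc['port']}"]
    # frontend/backend extras are carried verbatim in the logged entries
    for extra in svc.get("frontend_http_extra", []) + svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    if svc.get("custom_member_list"):
        # e.g. the glance entries earlier in this run list their servers explicitly
        lines += [f"    {m}" for m in svc["custom_member_list"] if m]
    else:
        backend_port = svc.get("listen_port", svc["port"])
        for host, addr in members:
            lines.append(f"    server {host} {addr}:{backend_port} "
                         f"check inter 2000 rise 2 fall 5")
    return "\n".join(lines)


if __name__ == "__main__":
    members = [("testbed-node-0", "192.168.16.10"),
               ("testbed-node-1", "192.168.16.11"),
               ("testbed-node-2", "192.168.16.12")]
    # values taken from the heat_api item in the log above
    heat_api = {"enabled": True, "mode": "http", "external": False,
                "port": "8004", "listen_port": "8004", "tls_backend": "no"}
    print(render_listen("heat_api", heat_api, members))
```

For entries that do carry a custom_member_list, such as the glance_api and glance_tls_proxy items earlier in this run, the listed server lines would be emitted verbatim instead of the generated ones.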
2025-04-01 19:47:15.942482 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-04-01 19:47:15.942492 | orchestrator | Tuesday 01 April 2025 19:42:31 +0000 (0:00:01.392) 0:03:26.837 ********* 2025-04-01 19:47:15.942502 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.942512 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.942523 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.942533 | orchestrator | 2025-04-01 19:47:15.942547 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-01 19:47:15.942557 | orchestrator | Tuesday 01 April 2025 19:42:33 +0000 (0:00:02.449) 0:03:29.287 ********* 2025-04-01 19:47:15.942567 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.942619 | orchestrator | 2025-04-01 19:47:15.942632 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-01 19:47:15.942643 | orchestrator | Tuesday 01 April 2025 19:42:34 +0000 (0:00:01.145) 0:03:30.432 ********* 2025-04-01 19:47:15.942660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:47:15.942677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:47:15.942693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:47:15.942708 | orchestrator | 2025-04-01 19:47:15.942717 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-01 19:47:15.942726 | orchestrator | Tuesday 01 April 2025 19:42:39 +0000 (0:00:04.858) 0:03:35.291 ********* 2025-04-01 19:47:15.942735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:47:15.942744 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.942759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:47:15.942773 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.942782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:47:15.942796 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.942805 | orchestrator | 2025-04-01 19:47:15.942817 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-01 19:47:15.942826 | orchestrator | Tuesday 01 April 2025 19:42:40 +0000 (0:00:00.949) 0:03:36.240 ********* 2025-04-01 19:47:15.942835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-01 19:47:15.942884 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.942897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-01 19:47:15.942944 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.942953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-01 19:47:15.942988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-01 19:47:15.942997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-01 19:47:15.943005 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943014 | orchestrator | 2025-04-01 19:47:15.943023 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-01 19:47:15.943031 | orchestrator | Tuesday 01 April 2025 19:42:41 +0000 (0:00:01.393) 0:03:37.634 ********* 2025-04-01 19:47:15.943040 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.943049 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.943057 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.943066 | orchestrator | 2025-04-01 19:47:15.943075 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-01 19:47:15.943083 | orchestrator | Tuesday 01 April 2025 19:42:43 +0000 (0:00:01.449) 0:03:39.083 ********* 2025-04-01 19:47:15.943092 | orchestrator | changed: [testbed-node-0] 
2025-04-01 19:47:15.943101 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.943109 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.943118 | orchestrator | 2025-04-01 19:47:15.943133 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-01 19:47:15.943142 | orchestrator | Tuesday 01 April 2025 19:42:45 +0000 (0:00:02.433) 0:03:41.517 ********* 2025-04-01 19:47:15.943150 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.943159 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.943168 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943176 | orchestrator | 2025-04-01 19:47:15.943185 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-01 19:47:15.943194 | orchestrator | Tuesday 01 April 2025 19:42:46 +0000 (0:00:00.577) 0:03:42.095 ********* 2025-04-01 19:47:15.943202 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.943211 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.943220 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943228 | orchestrator | 2025-04-01 19:47:15.943237 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-01 19:47:15.943246 | orchestrator | Tuesday 01 April 2025 19:42:46 +0000 (0:00:00.317) 0:03:42.412 ********* 2025-04-01 19:47:15.943254 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.943263 | orchestrator | 2025-04-01 19:47:15.943272 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-01 19:47:15.943280 | orchestrator | Tuesday 01 April 2025 19:42:47 +0000 (0:00:01.306) 0:03:43.719 ********* 2025-04-01 19:47:15.943289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:47:15.943303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2025-04-01 19:47:15.943316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:47:15.943336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:47:15.943345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:47:15.943373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:47:15.943382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943391 | orchestrator | 2025-04-01 19:47:15.943400 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-01 19:47:15.943409 | orchestrator | Tuesday 01 April 2025 19:42:53 +0000 (0:00:05.206) 0:03:48.925 ********* 2025-04-01 19:47:15.943417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:47:15.943432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:47:15.943441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943450 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.943464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:47:15.943474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:47:15.943483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943492 | orchestrator | 
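For readability, the keystone item that the haproxy-config tasks above iterate over can be sketched in YAML roughly as follows. Every value is copied from the logged dict; only the layout is illustrative (it is not necessarily the exact kolla-ansible variable file), the empty placeholder entry in the volumes list is dropped, and the healthcheck address is the per-node API IP (192.168.16.10/.11/.12).

    keystone:
      container_name: keystone
      group: keystone
      enabled: true
      image: registry.osism.tech/kolla/release/keystone:25.0.1.20241206
      volumes:
        - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - keystone_fernet_tokens:/etc/keystone/fernet-keys
      healthcheck:
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000']  # per-node address
        interval: '30'
        retries: '3'
        start_period: '5'
        timeout: '30'
      haproxy:
        keystone_internal:            # frontend on the internal VIP
          enabled: true
          mode: http
          external: false
          tls_backend: 'no'
          port: '5000'
          listen_port: '5000'
          backend_http_extra: ['balance "roundrobin"']
        keystone_external:            # public frontend behind api.testbed.osism.xyz
          enabled: true
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          tls_backend: 'no'
          port: '5000'
          listen_port: '5000'
          backend_http_extra: ['balance "roundrobin"']

The sibling keystone-ssh and keystone-fernet items shown above have no haproxy block, which is why only the keystone entry produces HAProxy configuration while the other two are skipped.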
skipping: [testbed-node-1] 2025-04-01 19:47:15.943501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:47:15.943515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:47:15.943524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:47:15.943533 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943542 | orchestrator | 2025-04-01 19:47:15.943551 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-01 19:47:15.943559 | orchestrator | Tuesday 01 April 2025 19:42:54 +0000 (0:00:01.310) 0:03:50.236 ********* 2025-04-01 19:47:15.943572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943606 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.943615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943633 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.943641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-01 19:47:15.943663 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943672 | orchestrator | 2025-04-01 19:47:15.943681 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-01 19:47:15.943690 | orchestrator | Tuesday 01 April 2025 19:42:55 +0000 (0:00:01.398) 0:03:51.635 ********* 2025-04-01 19:47:15.943698 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.943707 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.943715 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.943724 | orchestrator | 2025-04-01 19:47:15.943733 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-01 19:47:15.943742 | orchestrator | Tuesday 01 April 2025 19:42:57 +0000 (0:00:01.611) 0:03:53.246 ********* 2025-04-01 19:47:15.943750 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.943759 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.943768 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.943777 | orchestrator | 2025-04-01 19:47:15.943785 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-01 19:47:15.943794 | orchestrator | Tuesday 01 April 2025 19:42:59 +0000 (0:00:02.296) 0:03:55.543 ********* 2025-04-01 19:47:15.943802 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.943811 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.943820 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.943828 | orchestrator | 2025-04-01 19:47:15.943840 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-01 19:47:15.943849 | orchestrator | Tuesday 01 April 2025 19:43:00 +0000 (0:00:00.310) 0:03:55.854 ********* 2025-04-01 19:47:15.943858 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.943867 | orchestrator | 2025-04-01 19:47:15.943875 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-01 19:47:15.943884 | orchestrator | Tuesday 01 April 2025 19:43:01 +0000 (0:00:01.433) 0:03:57.287 ********* 2025-04-01 19:47:15.943893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:47:15.943907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.943917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:47:15.943931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.943940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:47:15.943949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.943958 | orchestrator | 2025-04-01 19:47:15.943967 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-01 19:47:15.943975 | orchestrator | Tuesday 01 April 2025 19:43:06 +0000 (0:00:04.741) 0:04:02.029 ********* 2025-04-01 19:47:15.943989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:47:15.944002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2025-04-01 19:47:15.944011 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.944020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:47:15.944029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944038 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.944051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:47:15.944060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944073 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.944082 | orchestrator | 2025-04-01 19:47:15.944091 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-01 19:47:15.944100 | orchestrator | Tuesday 01 April 2025 19:43:07 +0000 (0:00:01.223) 0:04:03.253 ********* 2025-04-01 19:47:15.944109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944130 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.944138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944156 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.944165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-01 19:47:15.944182 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.944190 | orchestrator | 2025-04-01 19:47:15.944199 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-04-01 19:47:15.944208 | orchestrator | Tuesday 01 April 2025 19:43:08 +0000 (0:00:01.303) 0:04:04.556 ********* 2025-04-01 19:47:15.944216 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.944225 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.944234 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.944242 | orchestrator | 2025-04-01 19:47:15.944251 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-01 19:47:15.944260 | orchestrator | Tuesday 01 April 2025 19:43:10 +0000 (0:00:01.435) 0:04:05.992 ********* 2025-04-01 19:47:15.944268 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.944277 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.944285 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.944294 | orchestrator | 2025-04-01 19:47:15.944303 | orchestrator | TASK [include_role : manila] *************************************************** 2025-04-01 19:47:15.944311 | orchestrator | Tuesday 01 April 2025 19:43:12 +0000 (0:00:02.352) 0:04:08.344 ********* 2025-04-01 19:47:15.944320 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.944328 | orchestrator | 2025-04-01 19:47:15.944337 | orchestrator | TASK [haproxy-config : Copying over manila haproxy 
config] ********************* 2025-04-01 19:47:15.944345 | orchestrator | Tuesday 01 April 2025 19:43:13 +0000 (0:00:01.310) 0:04:09.654 ********* 2025-04-01 19:47:15.944363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-01 19:47:15.944373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-01 19:47:15.944382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-01 19:47:15.944453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944485 | orchestrator | 2025-04-01 19:47:15.944494 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-01 19:47:15.944502 | orchestrator | Tuesday 01 April 2025 19:43:18 +0000 (0:00:04.467) 0:04:14.121 ********* 2025-04-01 19:47:15.944515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-01 19:47:15.944658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944672 | 
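The magnum-api and manila-api items logged above follow the same shape as the keystone one; as an illustrative YAML rendering of the manila-api item (values taken verbatim from the logged dict, empty placeholder volume entries omitted, layout not necessarily the exact kolla-ansible source):

    manila-api:
      container_name: manila_api
      group: manila-api
      enabled: true
      image: registry.osism.tech/kolla/release/manila-api:18.2.2.20241206
      volumes:
        - /etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      healthcheck:
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786']  # .11/.12 on the other nodes
        interval: '30'
        retries: '3'
        start_period: '5'
        timeout: '30'
      haproxy:
        manila_api:                   # internal frontend, port 8786
          enabled: 'yes'
          mode: http
          external: false
          port: '8786'
          listen_port: '8786'
        manila_api_external:          # public frontend behind api.testbed.osism.xyz
          enabled: 'yes'
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: '8786'
          listen_port: '8786'

magnum-api differs only in its port (9511) and a DUMMY_ENVIRONMENT environment variable. Only the API containers carry a haproxy block; manila-scheduler, manila-share, manila-data and magnum-conductor therefore show up as skipped in the haproxy-config tasks.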
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944691 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.944712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-01 19:47:15.944730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944765 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.944774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-01 19:47:15.944784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.944823 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.944837 | orchestrator | 2025-04-01 19:47:15.944846 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-01 19:47:15.944855 | orchestrator | Tuesday 01 April 2025 19:43:19 +0000 (0:00:01.226) 0:04:15.347 ********* 2025-04-01 19:47:15.944864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944886 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.944896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944914 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.944923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-01 19:47:15.944941 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.944949 | orchestrator | 2025-04-01 19:47:15.944958 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-01 19:47:15.944967 | orchestrator | Tuesday 01 April 2025 19:43:20 +0000 (0:00:01.235) 0:04:16.582 ********* 2025-04-01 19:47:15.944975 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.944984 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.944993 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.945001 | orchestrator | 2025-04-01 19:47:15.945010 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-01 19:47:15.945018 | orchestrator | Tuesday 01 April 2025 19:43:22 +0000 (0:00:01.539) 0:04:18.121 ********* 2025-04-01 19:47:15.945027 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.945036 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.945044 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.945053 | orchestrator | 2025-04-01 19:47:15.945062 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-04-01 19:47:15.945075 | orchestrator | Tuesday 01 April 2025 19:43:24 +0000 (0:00:02.417) 0:04:20.539 ********* 2025-04-01 19:47:15.945083 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.945092 | orchestrator | 2025-04-01 19:47:15.945100 | 
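At this point the same sequence has run for keystone, magnum and manila: copy the service's HAProxy config (changed), add the single-external-frontend variant (skipped), configure the firewall (skipped), and copy the ProxySQL users and rules configs (changed on all three nodes). An informal summary of the API endpoints configured so far, compiled from the logged items rather than from any single kolla-ansible file:

    # internal VIP and the shared external frontend use the same port per service
    keystone:   {port: '5000', external_fqdn: api.testbed.osism.xyz}
    magnum-api: {port: '9511', external_fqdn: api.testbed.osism.xyz}
    manila-api: {port: '8786', external_fqdn: api.testbed.osism.xyz}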
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-04-01 19:47:15.945126 | orchestrator | Tuesday 01 April 2025 19:43:26 +0000 (0:00:01.572) 0:04:22.112 ********* 2025-04-01 19:47:15.945137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:47:15.945146 | orchestrator | 2025-04-01 19:47:15.945155 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-04-01 19:47:15.945164 | orchestrator | Tuesday 01 April 2025 19:43:30 +0000 (0:00:04.126) 0:04:26.239 ********* 2025-04-01 19:47:15.945173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945209 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945246 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945294 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.945304 | orchestrator | 2025-04-01 19:47:15.945314 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-04-01 19:47:15.945324 | orchestrator | Tuesday 01 April 2025 19:43:33 +0000 (0:00:03.489) 0:04:29.728 ********* 2025-04-01 19:47:15.945334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945356 | 
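The mariadb item is the one service in this excerpt that overrides the HAProxy member list: members are health-checked over HTTP on port 4569 (the clustercheck service), and testbed-node-1/-2 are marked as backup so traffic normally goes to a single Galera node. A YAML sketch of the logged item, trimmed to the load-balancing part (values verbatim from the log; layout illustrative):

    mariadb:
      container_name: mariadb
      group: mariadb_shard_0
      enabled: true
      image: registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206
      healthcheck:
        test: ['CMD-SHELL', '/usr/bin/clustercheck']
        interval: '30'
        retries: '3'
        start_period: '5'
        timeout: '30'
      haproxy:
        mariadb:
          enabled: true
          mode: tcp
          port: '3306'
          listen_port: '3306'
          frontend_tcp_extra: ['option clitcpka', 'timeout client 3600s']
          backend_tcp_extra: ['option srvtcpka', 'timeout server 3600s', 'option httpchk']
          custom_member_list:
            - ' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5'
            - ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup'
            - ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup'
        mariadb_external_lb:
          enabled: false              # no external frontend for the database
          mode: tcp
          port: '3306'
          listen_port: '3306'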
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945367 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.945377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945403 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-01 19:47:15.945436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-01 19:47:15.945450 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945461 | orchestrator | 2025-04-01 19:47:15.945470 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-01 19:47:15.945480 | orchestrator | Tuesday 01 April 2025 19:43:37 +0000 (0:00:03.320) 0:04:33.048 ********* 2025-04-01 19:47:15.945490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945511 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945542 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-01 19:47:15.945632 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.945642 | orchestrator | 2025-04-01 19:47:15.945652 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-01 19:47:15.945661 | orchestrator | Tuesday 01 April 2025 19:43:40 +0000 (0:00:03.401) 0:04:36.450 ********* 2025-04-01 19:47:15.945670 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.945679 | 
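Two HAProxy entries hang off the mariadb service shown above: 'mariadb' (enabled, members addressed by their internal IPs 192.168.16.10-12) and 'mariadb_external_lb' (disabled, members addressed by hostname). The minimal sketch below is an illustration, not job output; it assumes the haproxy-config role only renders listeners whose 'enabled' flag is true, which is inferred from the flag name rather than stated in this log.

# Reduced to the fields that differ between the two entries in the log.
listeners = {
    "mariadb": {
        "enabled": True,
        "mode": "tcp",
        "listen_port": "3306",
        "members": ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
    },
    "mariadb_external_lb": {
        "enabled": False,
        "mode": "tcp",
        "listen_port": "3306",
        "members": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    },
}

for name, cfg in listeners.items():
    # Assumption: only enabled entries become actual HAProxy listeners.
    state = "rendered" if cfg["enabled"] else "skipped (disabled)"
    print(f"{name}: {cfg['mode']}/{cfg['listen_port']} -> {cfg['members']} [{state}]")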
orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.945688 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.945696 | orchestrator | 2025-04-01 19:47:15.945705 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-01 19:47:15.945714 | orchestrator | Tuesday 01 April 2025 19:43:42 +0000 (0:00:02.311) 0:04:38.762 ********* 2025-04-01 19:47:15.945722 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945731 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945740 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.945749 | orchestrator | 2025-04-01 19:47:15.945757 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-01 19:47:15.945766 | orchestrator | Tuesday 01 April 2025 19:43:44 +0000 (0:00:01.973) 0:04:40.736 ********* 2025-04-01 19:47:15.945774 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945783 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945792 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.945800 | orchestrator | 2025-04-01 19:47:15.945809 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-04-01 19:47:15.945818 | orchestrator | Tuesday 01 April 2025 19:43:45 +0000 (0:00:00.320) 0:04:41.057 ********* 2025-04-01 19:47:15.945827 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.945835 | orchestrator | 2025-04-01 19:47:15.945844 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-01 19:47:15.945852 | orchestrator | Tuesday 01 April 2025 19:43:46 +0000 (0:00:01.528) 0:04:42.585 ********* 2025-04-01 19:47:15.945861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-01 19:47:15.945872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-01 19:47:15.945886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-01 19:47:15.945900 | orchestrator | 2025-04-01 19:47:15.945909 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-04-01 19:47:15.945918 | orchestrator | Tuesday 01 April 2025 19:43:48 +0000 (0:00:01.612) 0:04:44.197 ********* 2025-04-01 19:47:15.945926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-01 19:47:15.945935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-01 19:47:15.945943 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.945952 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.945967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-01 19:47:15.945976 | orchestrator | skipping: 
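The memcached haproxy config above was copied, but its single listener entry is disabled ('enabled': False) and flagged 'active_passive': True. A common way to realise active/passive TCP balancing in HAProxy is to mark every member except one as "backup"; the sketch below illustrates that idea with the hostnames and port 11211 from the log. It is an assumption about how such a listener might be rendered if it were enabled, not something this run actually does.

def member_lines(hosts, port, active_passive):
    """Return illustrative HAProxy 'server' lines for a TCP listener."""
    lines = []
    for index, host in enumerate(hosts):
        # Active/passive: all but the first member are marked as backup.
        suffix = " backup" if active_passive and index > 0 else ""
        lines.append(f" server {host} {host}:{port} check{suffix}")
    return lines

for line in member_lines(["testbed-node-0", "testbed-node-1", "testbed-node-2"],
                         11211, active_passive=True):
    print(line)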
[testbed-node-2] 2025-04-01 19:47:15.945984 | orchestrator | 2025-04-01 19:47:15.945992 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-04-01 19:47:15.946000 | orchestrator | Tuesday 01 April 2025 19:43:49 +0000 (0:00:00.621) 0:04:44.819 ********* 2025-04-01 19:47:15.946008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-01 19:47:15.946045 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.946060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-01 19:47:15.946069 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.946077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-01 19:47:15.946086 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.946094 | orchestrator | 2025-04-01 19:47:15.946109 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-01 19:47:15.946118 | orchestrator | Tuesday 01 April 2025 19:43:49 +0000 (0:00:00.854) 0:04:45.674 ********* 2025-04-01 19:47:15.946126 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.946134 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.946142 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.946150 | orchestrator | 2025-04-01 19:47:15.946158 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-04-01 19:47:15.946166 | orchestrator | Tuesday 01 April 2025 19:43:50 +0000 (0:00:00.775) 0:04:46.449 ********* 2025-04-01 19:47:15.946174 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.946182 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.946190 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.946198 | orchestrator | 2025-04-01 19:47:15.946206 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-01 19:47:15.946214 | orchestrator | Tuesday 01 April 2025 19:43:52 +0000 (0:00:02.052) 0:04:48.502 ********* 2025-04-01 19:47:15.946221 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.946229 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.946237 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.946245 | orchestrator | 2025-04-01 19:47:15.946253 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-01 19:47:15.946261 | orchestrator | Tuesday 01 April 2025 19:43:53 +0000 (0:00:00.317) 0:04:48.820 ********* 2025-04-01 19:47:15.946269 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.946277 | orchestrator | 2025-04-01 19:47:15.946285 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-01 
19:47:15.946293 | orchestrator | Tuesday 01 April 2025 19:43:54 +0000 (0:00:01.805) 0:04:50.625 ********* 2025-04-01 19:47:15.946301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:47:15.946311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.946360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.946438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.946479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:47:15.946488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 
19:47:15.946518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.946553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 
19:47:15.946570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.946642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:47:15.946651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.946692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.946758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.946831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.946839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.946866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.946875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946883 | orchestrator | 2025-04-01 19:47:15.946891 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-01 19:47:15.946899 | orchestrator | Tuesday 01 April 2025 19:44:00 +0000 (0:00:05.798) 0:04:56.423 ********* 2025-04-01 19:47:15.946912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:47:15.946920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.946968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:47:15.946977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.946998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:47:15.947077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.947089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.947111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947136 | orchestrator | 
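The healthcheck dictionaries repeated in the skipped items above all share one shape: interval, retries, start_period and timeout given as strings of seconds, plus a CMD-SHELL test such as healthcheck_port or healthcheck_curl. The sketch below is illustrative only and is not the kolla_docker module's actual code; the field names are taken from the log, while the conversion into docker run --health-* flags is an assumption made for readability.

    # Illustrative sketch: turn a kolla-style healthcheck dict (as seen in the
    # skipped items above) into docker CLI --health-* arguments.
    from typing import Dict, List

    def healthcheck_to_docker_args(hc: Dict) -> List[str]:
        """Build docker run arguments from a kolla healthcheck dict (sketch)."""
        test = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672']
        cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
        return [
            f"--health-cmd={cmd}",
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    example = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_port neutron-dhcp-agent 5672"],
               "timeout": "30"}
    print(healthcheck_to_docker_args(example))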
skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.947192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:47:15.947251 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.947259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.947324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947378 | orchestrator | 2025-04-01 19:47:15 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:15.947471 | orchestrator | 2025-04-01 19:47:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:15.947510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.947521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.947530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:47:15.947555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947568 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.947614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:47:15.947693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:47:15.947701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.947710 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.947718 | orchestrator | 2025-04-01 19:47:15.947726 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-01 19:47:15.947741 | orchestrator | Tuesday 01 April 2025 19:44:03 +0000 (0:00:02.460) 0:04:58.884 ********* 2025-04-01 19:47:15.947750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947792 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.947800 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.947809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-01 19:47:15.947825 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.947833 | orchestrator | 2025-04-01 19:47:15.947841 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-01 19:47:15.947849 | orchestrator | Tuesday 01 April 2025 19:44:05 +0000 (0:00:02.545) 0:05:01.430 ********* 2025-04-01 19:47:15.947857 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.947864 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.947888 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.947896 | orchestrator | 2025-04-01 19:47:15.947903 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-01 19:47:15.947910 | orchestrator | Tuesday 01 April 2025 19:44:07 +0000 (0:00:01.549) 0:05:02.980 ********* 2025-04-01 19:47:15.947917 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.947924 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.947931 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.947938 | orchestrator | 2025-04-01 19:47:15.947945 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-01 19:47:15.947952 | orchestrator | Tuesday 01 April 2025 19:44:09 +0000 (0:00:02.659) 0:05:05.639 ********* 2025-04-01 19:47:15.947959 | orchestrator | 
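Each skipped item in the neutron tasks above is one entry of a services mapping, and only the services that are enabled and carry an 'haproxy' sub-dict (here neutron-server, with an internal listener and an external frontend behind api.testbed.osism.xyz on port 9696) end up with load-balancer configuration; the disabled agents are skipped per item. The snippet below is a rough, non-authoritative sketch of that data model, not the haproxy-config role's real implementation, and it assumes boolean flags (the real data also uses the strings 'yes'/'no', see the note after the nova tasks further down).

    # Sketch: walk a kolla-style services mapping (shaped like the skipped items
    # above) and list the haproxy listeners that are actually enabled.
    def enabled_listeners(services):
        for name, svc in services.items():
            if not svc.get("enabled"):           # assumes boolean flags
                continue
            for listener_name, listener in svc.get("haproxy", {}).items():
                if listener.get("enabled"):
                    yield (name, listener_name,
                           listener["listen_port"], listener["external"])

    services = {
        "neutron-server": {
            "enabled": True,
            "haproxy": {
                "neutron_server": {"enabled": True, "mode": "http", "external": False,
                                   "port": "9696", "listen_port": "9696"},
                "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                            "external_fqdn": "api.testbed.osism.xyz",
                                            "port": "9696", "listen_port": "9696"},
            },
        },
        # Agents without an 'haproxy' mapping are skipped entirely.
        "neutron-dhcp-agent": {"enabled": False},
    }
    for row in enabled_listeners(services):
        print(row)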
included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.947966 | orchestrator | 2025-04-01 19:47:15.947973 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-01 19:47:15.947980 | orchestrator | Tuesday 01 April 2025 19:44:11 +0000 (0:00:02.124) 0:05:07.764 ********* 2025-04-01 19:47:15.947988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.947995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.948007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.948014 | orchestrator | 2025-04-01 19:47:15.948039 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-04-01 19:47:15.948046 | orchestrator | Tuesday 01 April 2025 19:44:18 +0000 (0:00:06.171) 0:05:13.935 ********* 2025-04-01 19:47:15.948076 | orchestrator | 
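The placement-api items just copied above describe an internal listener and an external frontend on port 8780 with tls_backend 'no', backed by the three controller addresses that also appear in the healthcheck_curl tests. As a simplified illustration of what such a listener entry implies (not the kolla haproxy template, and ignoring TLS, the external frontend and the real bind addresses), a minimal frontend/backend pair could be rendered like this:

    # Illustration only: render a minimal haproxy snippet from one listener dict.
    def render_frontend(name, listener, backend_hosts):
        lines = [f"frontend {name}_front",
                 f"    mode {listener['mode']}",
                 f"    bind *:{listener['listen_port']}",
                 f"    default_backend {name}_back",
                 f"backend {name}_back",
                 f"    mode {listener['mode']}"]
        for host, addr in backend_hosts:
            lines.append(f"    server {host} {addr}:{listener['port']} check")
        return "\n".join(lines)

    placement_api = {"enabled": True, "mode": "http", "external": False,
                     "port": "8780", "listen_port": "8780", "tls_backend": "no"}
    print(render_frontend("placement_api", placement_api,
                          [("testbed-node-0", "192.168.16.10"),
                           ("testbed-node-1", "192.168.16.11"),
                           ("testbed-node-2", "192.168.16.12")]))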
skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948085 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.948092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948100 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.948107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948118 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.948125 | orchestrator | 2025-04-01 19:47:15.948132 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-04-01 19:47:15.948139 | orchestrator | Tuesday 01 April 2025 19:44:18 +0000 (0:00:00.526) 0:05:14.462 ********* 2025-04-01 19:47:15.948147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 
19:47:15.948154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948161 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.948168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948182 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.948189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948204 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.948211 | orchestrator | 2025-04-01 19:47:15.948218 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-04-01 19:47:15.948238 | orchestrator | Tuesday 01 April 2025 19:44:19 +0000 (0:00:01.260) 0:05:15.722 ********* 2025-04-01 19:47:15.948246 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.948252 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.948259 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.948267 | orchestrator | 2025-04-01 19:47:15.948274 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-04-01 19:47:15.948281 | orchestrator | Tuesday 01 April 2025 19:44:21 +0000 (0:00:01.628) 0:05:17.351 ********* 2025-04-01 19:47:15.948288 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.948295 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.948302 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.948310 | orchestrator | 2025-04-01 19:47:15.948318 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-01 19:47:15.948326 | orchestrator | Tuesday 01 April 2025 19:44:23 +0000 (0:00:02.206) 0:05:19.558 ********* 2025-04-01 19:47:15.948333 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.948341 | orchestrator | 2025-04-01 19:47:15.948349 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-01 19:47:15.948356 | orchestrator | Tuesday 01 April 2025 19:44:25 +0000 (0:00:01.659) 0:05:21.217 ********* 2025-04-01 19:47:15.948364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.948383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.948422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.948458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948488 | orchestrator | 2025-04-01 19:47:15.948495 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-01 19:47:15.948503 | orchestrator | Tuesday 01 April 2025 19:44:31 +0000 (0:00:06.412) 0:05:27.630 ********* 2025-04-01 19:47:15.948511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948546 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.948554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948611 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.948625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.948634 | orchestrator | 
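One detail worth noting in the nova items above: the 'enabled' flags mix Python booleans with strings, for example nova_api_external uses True while nova_metadata_external and nova-super-conductor use 'no' (neutron-tls-proxy earlier does the same). Any filter over this data therefore has to normalize both forms; the helper below is a small illustration, roughly what Ansible's bool filter does, and is not taken from the playbooks themselves.

    # Sketch: normalize the mixed boolean / 'yes'-'no' enabled flags seen above.
    def to_bool(value) -> bool:
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ("yes", "true", "1", "on")

    assert to_bool(True) and to_bool("yes")
    assert not to_bool(False) and not to_bool("no")
    # Example: the external nova metadata frontend stays disabled in this deploy.
    print(to_bool({"enabled": "no"}["enabled"]))  # -> False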
skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.948650 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.948658 | orchestrator | 2025-04-01 19:47:15.948665 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-01 19:47:15.948672 | orchestrator | Tuesday 01 April 2025 19:44:33 +0000 (0:00:01.317) 0:05:28.948 ********* 2025-04-01 19:47:15.948679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948727 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.948735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948763 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.948771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-01 19:47:15.948799 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.948806 | orchestrator | 2025-04-01 19:47:15.948813 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-01 19:47:15.948820 | orchestrator | Tuesday 01 April 2025 19:44:34 +0000 (0:00:01.486) 0:05:30.434 ********* 2025-04-01 19:47:15.948827 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.948834 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.948841 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.948848 | orchestrator | 2025-04-01 19:47:15.948855 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-01 19:47:15.948862 | orchestrator | Tuesday 01 April 2025 19:44:36 +0000 (0:00:01.556) 0:05:31.991 ********* 2025-04-01 19:47:15.948869 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.948876 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.948883 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.948890 | orchestrator | 2025-04-01 19:47:15.948897 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-01 19:47:15.948904 | orchestrator | Tuesday 01 April 2025 19:44:38 +0000 (0:00:02.574) 0:05:34.565 ********* 2025-04-01 19:47:15.948911 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.948918 | orchestrator | 2025-04-01 19:47:15.948928 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-01 19:47:15.948936 | orchestrator | Tuesday 01 April 2025 19:44:40 +0000 (0:00:01.862) 0:05:36.428 ********* 2025-04-01 19:47:15.948948 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-01 19:47:15.948955 | orchestrator | 2025-04-01 19:47:15.948962 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-01 19:47:15.948969 | orchestrator | Tuesday 01 April 2025 19:44:42 +0000 (0:00:01.386) 0:05:37.815 ********* 2025-04-01 19:47:15.948990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
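For readability, the haproxy sub-dict that the haproxy-config role consumed for nova-api can be re-expressed as YAML. This is only a restatement of the item printed above, with no new values; key names come straight from the log.

nova-api:
  haproxy:
    nova_api:                  # internal VIP frontend/backend
      enabled: true
      mode: http
      external: false
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_api_external:         # external frontend behind api.testbed.osism.xyz
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_metadata:
      enabled: true
      mode: http
      external: false
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"
    nova_metadata_external:    # disabled: metadata API not exposed externally
      enabled: "no"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"

Each enabled entry becomes an HAProxy frontend/backend pair on the controllers; the single-external-frontend and firewall variants of the task are skipped on this run, and the entry with enabled: "no" (the external metadata endpoint) is not rendered.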
{'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-01 19:47:15.948998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-01 19:47:15.949005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-01 19:47:15.949013 | orchestrator | 2025-04-01 19:47:15.949020 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-01 19:47:15.949027 | orchestrator | Tuesday 01 April 2025 19:44:48 +0000 (0:00:06.487) 0:05:44.302 ********* 2025-04-01 19:47:15.949039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949047 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949062 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949080 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949087 | orchestrator | 2025-04-01 19:47:15.949094 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-01 19:47:15.949101 | orchestrator | Tuesday 01 April 2025 19:44:50 +0000 (0:00:02.192) 0:05:46.495 ********* 2025-04-01 19:47:15.949108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949123 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949161 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-01 19:47:15.949183 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949190 | orchestrator | 2025-04-01 19:47:15.949197 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-01 19:47:15.949204 | orchestrator | Tuesday 01 April 2025 19:44:52 +0000 (0:00:01.966) 0:05:48.462 ********* 2025-04-01 19:47:15.949211 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.949218 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.949225 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.949232 | orchestrator | 2025-04-01 19:47:15.949239 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-01 19:47:15.949246 | orchestrator | Tuesday 01 April 2025 19:44:55 +0000 (0:00:03.105) 0:05:51.567 ********* 2025-04-01 19:47:15.949253 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.949260 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.949267 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.949274 | orchestrator | 2025-04-01 19:47:15.949281 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-04-01 19:47:15.949288 | orchestrator | Tuesday 01 April 2025 19:44:59 +0000 
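The nova-novncproxy entries above differ from the plain API services in one detail: they carry backend_http_extra with a long tunnel timeout, so that long-lived noVNC WebSocket console sessions are not cut by HAProxy's default timeouts. Re-expressed as YAML from the item in the log (no new values added):

nova-novncproxy:
  enabled: true
  haproxy:
    nova_novncproxy:
      enabled: true
      mode: http
      external: false
      port: "6080"
      listen_port: "6080"
      backend_http_extra:
        - timeout tunnel 1h    # keep websocket console sessions open
    nova_novncproxy_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "6080"
      listen_port: "6080"
      backend_http_extra:
        - timeout tunnel 1h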
(0:00:03.556) 0:05:55.124 ********* 2025-04-01 19:47:15.949301 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-04-01 19:47:15.949308 | orchestrator | 2025-04-01 19:47:15.949315 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-01 19:47:15.949322 | orchestrator | Tuesday 01 April 2025 19:45:00 +0000 (0:00:01.512) 0:05:56.636 ********* 2025-04-01 19:47:15.949329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949340 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949355 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949370 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949376 | orchestrator | 2025-04-01 19:47:15.949383 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-01 19:47:15.949390 | orchestrator | Tuesday 01 April 2025 19:45:02 +0000 (0:00:01.704) 0:05:58.340 ********* 2025-04-01 19:47:15.949415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949424 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': 
False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949439 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-01 19:47:15.949453 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949460 | orchestrator | 2025-04-01 19:47:15.949467 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-01 19:47:15.949474 | orchestrator | Tuesday 01 April 2025 19:45:04 +0000 (0:00:01.790) 0:06:00.131 ********* 2025-04-01 19:47:15.949481 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949488 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949499 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949506 | orchestrator | 2025-04-01 19:47:15.949514 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-01 19:47:15.949521 | orchestrator | Tuesday 01 April 2025 19:45:06 +0000 (0:00:02.138) 0:06:02.269 ********* 2025-04-01 19:47:15.949528 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.949535 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.949545 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.949552 | orchestrator | 2025-04-01 19:47:15.949559 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-01 19:47:15.949566 | orchestrator | Tuesday 01 April 2025 19:45:09 +0000 (0:00:02.892) 0:06:05.162 ********* 2025-04-01 19:47:15.949573 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.949592 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.949600 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.949607 | orchestrator | 2025-04-01 19:47:15.949614 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-01 19:47:15.949621 | orchestrator | Tuesday 01 April 2025 19:45:12 +0000 (0:00:03.505) 0:06:08.667 ********* 2025-04-01 19:47:15.949628 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-01 19:47:15.949635 | orchestrator | 2025-04-01 19:47:15.949642 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-01 19:47:15.949649 | orchestrator | Tuesday 01 April 2025 19:45:14 +0000 (0:00:01.649) 0:06:10.317 ********* 2025-04-01 19:47:15.949656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949670 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949677 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949707 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949715 | orchestrator | 2025-04-01 19:47:15.949722 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-01 19:47:15.949729 | orchestrator | Tuesday 01 April 2025 19:45:16 +0000 (0:00:02.199) 0:06:12.516 ********* 2025-04-01 19:47:15.949736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949748 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949769 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-01 19:47:15.949783 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949790 | orchestrator | 2025-04-01 19:47:15.949797 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-01 19:47:15.949804 | orchestrator | Tuesday 01 April 2025 19:45:18 +0000 (0:00:01.573) 0:06:14.090 ********* 2025-04-01 19:47:15.949811 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.949818 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.949825 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.949832 | orchestrator | 2025-04-01 19:47:15.949839 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-01 19:47:15.949846 | orchestrator | Tuesday 01 April 2025 19:45:20 +0000 (0:00:02.290) 0:06:16.381 ********* 2025-04-01 19:47:15.949853 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.949860 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.949867 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.949874 | orchestrator | 2025-04-01 19:47:15.949881 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-01 19:47:15.949891 | orchestrator | Tuesday 01 April 2025 19:45:23 +0000 (0:00:03.032) 0:06:19.414 ********* 2025-04-01 19:47:15.949899 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.949906 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.949913 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.949920 | orchestrator | 2025-04-01 19:47:15.949927 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-01 19:47:15.949934 | orchestrator | Tuesday 01 April 2025 19:45:27 +0000 (0:00:03.576) 0:06:22.990 ********* 2025-04-01 19:47:15.949941 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.949948 | orchestrator | 2025-04-01 19:47:15.949955 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-01 19:47:15.949962 | orchestrator | Tuesday 01 April 2025 19:45:28 +0000 (0:00:01.744) 0:06:24.735 ********* 2025-04-01 19:47:15.949982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.949995 | orchestrator | 
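nova-spicehtml5proxy (port 6082) and nova-serialproxy (port 6083) run through the same cell_proxy_loadbalancer.yml include as the noVNC proxy, but every task is skipped because both services are disabled in this deployment; only the noVNC proxy is load-balanced. In kolla-ansible this choice is typically made in globals.yml. A minimal sketch under that assumption follows; the variable names are kolla-ansible conventions and are not taken from this log.

# /etc/kolla/globals.yml (sketch, not from this log)
nova_console: "novnc"                    # selects noVNC rather than SPICE
enable_nova_serialconsole_proxy: "no"    # leaves nova-serialproxy disabled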
skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.950059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.950123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950177 | orchestrator | 2025-04-01 19:47:15.950185 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-01 19:47:15.950192 | orchestrator | Tuesday 01 April 2025 19:45:33 +0000 (0:00:04.801) 0:06:29.536 ********* 2025-04-01 19:47:15.950199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.950207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950255 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.950269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.950277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950310 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.950335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.950344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 19:47:15.950351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 19:47:15.950366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:47:15.950377 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.950384 | orchestrator | 2025-04-01 19:47:15.950391 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-01 19:47:15.950398 | orchestrator | Tuesday 01 April 2025 19:45:34 +0000 (0:00:01.063) 0:06:30.600 ********* 2025-04-01 19:47:15.950405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950420 | orchestrator | skipping: [testbed-node-0] 2025-04-01 
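The octavia-api item follows the same pattern on port 9876; its enabled flags are the string "yes" rather than a boolean, which Ansible's bool filter accepts as truthy. As YAML, restated from the item above with no new values:

octavia-api:
  container_name: octavia_api
  group: octavia-api
  enabled: true
  haproxy:
    octavia_api:
      enabled: "yes"
      mode: http
      external: false
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
    octavia_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"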
19:47:15.950427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950441 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.950462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-01 19:47:15.950477 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.950484 | orchestrator | 2025-04-01 19:47:15.950491 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-01 19:47:15.950498 | orchestrator | Tuesday 01 April 2025 19:45:36 +0000 (0:00:01.592) 0:06:32.192 ********* 2025-04-01 19:47:15.950505 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.950512 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.950519 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.950526 | orchestrator | 2025-04-01 19:47:15.950533 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-01 19:47:15.950540 | orchestrator | Tuesday 01 April 2025 19:45:38 +0000 (0:00:01.675) 0:06:33.868 ********* 2025-04-01 19:47:15.950547 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.950555 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.950561 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.950568 | orchestrator | 2025-04-01 19:47:15.950575 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-01 19:47:15.950595 | orchestrator | Tuesday 01 April 2025 19:45:40 +0000 (0:00:02.838) 0:06:36.706 ********* 2025-04-01 19:47:15.950602 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.950609 | orchestrator | 2025-04-01 19:47:15.950616 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-01 19:47:15.950623 | orchestrator | Tuesday 01 April 2025 19:45:42 +0000 (0:00:01.891) 0:06:38.598 ********* 2025-04-01 19:47:15.950630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:47:15.950647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:47:15.950655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:47:15.950677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:47:15.950686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:47:15.950703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:47:15.950711 | orchestrator | 2025-04-01 19:47:15.950718 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-01 19:47:15.950781 | orchestrator | Tuesday 01 April 2025 19:45:50 +0000 (0:00:07.310) 0:06:45.908 ********* 2025-04-01 19:47:15.950803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:47:15.950812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:47:15.950819 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.950827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:47:15.950838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:47:15.950846 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.950853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:47:15.950874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:47:15.950883 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.950890 | orchestrator | 2025-04-01 19:47:15.950897 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-01 19:47:15.950904 | orchestrator | Tuesday 01 April 2025 19:45:51 +0000 (0:00:01.004) 0:06:46.913 ********* 2025-04-01 19:47:15.950911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-01 19:47:15.950922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950937 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.950944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-01 19:47:15.950951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950965 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.950976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-01 19:47:15.950983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-01 19:47:15.950997 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.951004 | orchestrator | 2025-04-01 19:47:15.951011 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-04-01 19:47:15.951018 | orchestrator | Tuesday 01 April 2025 19:45:52 +0000 (0:00:01.502) 0:06:48.416 ********* 2025-04-01 19:47:15.951025 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.951032 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.951039 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.951046 | orchestrator | 2025-04-01 19:47:15.951053 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-04-01 19:47:15.951060 | orchestrator | Tuesday 01 April 2025 19:45:53 +0000 (0:00:00.489) 0:06:48.906 ********* 2025-04-01 19:47:15.951067 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.951074 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.951081 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.951088 | orchestrator | 2025-04-01 19:47:15.951095 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-04-01 19:47:15.951102 | orchestrator | Tuesday 01 April 2025 19:45:54 +0000 (0:00:01.832) 0:06:50.739 ********* 2025-04-01 19:47:15.951122 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.951129 | orchestrator | 2025-04-01 19:47:15.951136 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-04-01 19:47:15.951143 | orchestrator | Tuesday 01 April 2025 19:45:56 +0000 (0:00:01.972) 0:06:52.711 ********* 2025-04-01 19:47:15.951151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 19:47:15.951162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951170 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 19:47:15.951215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-04-01 19:47:15.951235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 19:47:15.951257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 19:47:15.951312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 19:47:15.951377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 19:47:15.951405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951472 | orchestrator | 2025-04-01 19:47:15.951480 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-04-01 19:47:15.951487 | orchestrator | Tuesday 01 April 2025 19:46:02 +0000 (0:00:05.537) 0:06:58.249 ********* 2025-04-01 19:47:15.951494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 19:47:15.951501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 19:47:15.951545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 19:47:15.951567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951626 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951656 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.951663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 19:47:15.951685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951722 | orchestrator | 
skipping: [testbed-node-1] 2025-04-01 19:47:15.951729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 19:47:15.951740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 19:47:15.951748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 19:47:15.951781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 19:47:15.951792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 19:47:15.951818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 
'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:47:15.951825 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.951832 | orchestrator | 2025-04-01 19:47:15.951839 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-01 19:47:15.951846 | orchestrator | Tuesday 01 April 2025 19:46:04 +0000 (0:00:01.806) 0:07:00.056 ********* 2025-04-01 19:47:15.951853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 19:47:15.951875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 19:47:15.951886 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.951893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 19:47:15.951915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-01 19:47:15.951932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 
19:47:15.951939 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.951946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 19:47:15.951956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-01 19:47:15.951964 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.951971 | orchestrator | 2025-04-01 19:47:15.951978 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-01 19:47:15.951985 | orchestrator | Tuesday 01 April 2025 19:46:05 +0000 (0:00:01.700) 0:07:01.756 ********* 2025-04-01 19:47:15.951992 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.951999 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952009 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952016 | orchestrator | 2025-04-01 19:47:15.952022 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-01 19:47:15.952029 | orchestrator | Tuesday 01 April 2025 19:46:06 +0000 (0:00:00.765) 0:07:02.522 ********* 2025-04-01 19:47:15.952035 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952041 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952047 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952056 | orchestrator | 2025-04-01 19:47:15.952062 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-01 19:47:15.952068 | orchestrator | Tuesday 01 April 2025 19:46:08 +0000 (0:00:02.159) 0:07:04.681 ********* 2025-04-01 19:47:15.952075 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.952081 | orchestrator | 2025-04-01 19:47:15.952087 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-01 19:47:15.952093 | orchestrator | Tuesday 01 April 2025 19:46:10 +0000 (0:00:01.976) 0:07:06.657 ********* 2025-04-01 19:47:15.952100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:47:15.952110 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:47:15.952120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-01 19:47:15.952127 | orchestrator | 2025-04-01 19:47:15.952133 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-01 19:47:15.952139 | orchestrator | Tuesday 01 April 2025 19:46:14 +0000 (0:00:03.181) 0:07:09.838 ********* 2025-04-01 19:47:15.952145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-01 19:47:15.952155 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-01 19:47:15.952169 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-01 19:47:15.952182 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952189 | orchestrator | 2025-04-01 19:47:15.952195 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-01 19:47:15.952201 | orchestrator | Tuesday 01 April 2025 19:46:14 +0000 (0:00:00.749) 0:07:10.588 ********* 2025-04-01 19:47:15.952207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-01 19:47:15.952214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-01 19:47:15.952220 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952226 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-01 19:47:15.952241 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952247 | orchestrator | 2025-04-01 19:47:15.952254 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-01 19:47:15.952260 | orchestrator | Tuesday 01 April 2025 19:46:16 +0000 (0:00:01.237) 0:07:11.826 ********* 2025-04-01 19:47:15.952266 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952272 | orchestrator | skipping: 
[testbed-node-1] 2025-04-01 19:47:15.952279 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952285 | orchestrator | 2025-04-01 19:47:15.952291 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-01 19:47:15.952301 | orchestrator | Tuesday 01 April 2025 19:46:16 +0000 (0:00:00.535) 0:07:12.362 ********* 2025-04-01 19:47:15.952307 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952313 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952319 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952325 | orchestrator | 2025-04-01 19:47:15.952332 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-01 19:47:15.952338 | orchestrator | Tuesday 01 April 2025 19:46:18 +0000 (0:00:02.417) 0:07:14.779 ********* 2025-04-01 19:47:15.952344 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:47:15.952350 | orchestrator | 2025-04-01 19:47:15.952356 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-01 19:47:15.952362 | orchestrator | Tuesday 01 April 2025 19:46:21 +0000 (0:00:02.055) 0:07:16.834 ********* 2025-04-01 19:47:15.952369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}}) 2025-04-01 19:47:15.952425 | orchestrator | 2025-04-01 19:47:15.952431 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-01 19:47:15.952438 | orchestrator | Tuesday 01 April 2025 19:46:29 +0000 (0:00:08.854) 0:07:25.689 ********* 2025-04-01 19:47:15.952449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952472 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952496 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-01 19:47:15.952525 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952531 | orchestrator | 2025-04-01 19:47:15.952537 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-01 19:47:15.952544 | orchestrator | Tuesday 01 April 2025 19:46:31 +0000 (0:00:01.392) 0:07:27.082 ********* 2025-04-01 19:47:15.952550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952576 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952619 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-01 19:47:15.952654 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952660 | orchestrator | 2025-04-01 19:47:15.952667 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-01 19:47:15.952673 | orchestrator | Tuesday 01 April 2025 19:46:32 +0000 (0:00:01.619) 0:07:28.701 ********* 2025-04-01 19:47:15.952679 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.952685 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.952692 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.952698 | orchestrator | 2025-04-01 19:47:15.952704 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules 
config] ************ 2025-04-01 19:47:15.952713 | orchestrator | Tuesday 01 April 2025 19:46:34 +0000 (0:00:01.688) 0:07:30.390 ********* 2025-04-01 19:47:15.952720 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.952726 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.952732 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.952738 | orchestrator | 2025-04-01 19:47:15.952745 | orchestrator | TASK [include_role : swift] **************************************************** 2025-04-01 19:47:15.952751 | orchestrator | Tuesday 01 April 2025 19:46:37 +0000 (0:00:02.804) 0:07:33.194 ********* 2025-04-01 19:47:15.952757 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952764 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952772 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952779 | orchestrator | 2025-04-01 19:47:15.952785 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-04-01 19:47:15.952791 | orchestrator | Tuesday 01 April 2025 19:46:37 +0000 (0:00:00.341) 0:07:33.536 ********* 2025-04-01 19:47:15.952798 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952804 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952810 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952816 | orchestrator | 2025-04-01 19:47:15.952823 | orchestrator | TASK [include_role : trove] **************************************************** 2025-04-01 19:47:15.952829 | orchestrator | Tuesday 01 April 2025 19:46:38 +0000 (0:00:00.650) 0:07:34.186 ********* 2025-04-01 19:47:15.952835 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952842 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952848 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952854 | orchestrator | 2025-04-01 19:47:15.952860 | orchestrator | TASK [include_role : venus] **************************************************** 2025-04-01 19:47:15.952866 | orchestrator | Tuesday 01 April 2025 19:46:39 +0000 (0:00:00.601) 0:07:34.787 ********* 2025-04-01 19:47:15.952873 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952879 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952885 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952891 | orchestrator | 2025-04-01 19:47:15.952897 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-04-01 19:47:15.952904 | orchestrator | Tuesday 01 April 2025 19:46:39 +0000 (0:00:00.613) 0:07:35.401 ********* 2025-04-01 19:47:15.952910 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952916 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952922 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952928 | orchestrator | 2025-04-01 19:47:15.952935 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-04-01 19:47:15.952941 | orchestrator | Tuesday 01 April 2025 19:46:39 +0000 (0:00:00.345) 0:07:35.747 ********* 2025-04-01 19:47:15.952947 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.952953 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.952959 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.952966 | orchestrator | 2025-04-01 19:47:15.952972 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-04-01 19:47:15.952978 | orchestrator | 
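Note on the haproxy-config and proxysql-config tasks above: each kolla-ansible service definition carries a `haproxy` dict (visible in the logged loop items), and the role loops over those entries, templating a section only for listeners that are enabled; variants that do not apply on this run (for example a disabled `*_external` listener, or the single-external-frontend case) show up as `skipping`. The snippet below is a simplified Python illustration of that selection logic, not the role's actual Ansible/Jinja2 implementation; the function name and the `include_external` flag are assumptions made for the example.

```python
# Simplified illustration (not the kolla-ansible role itself) of how the per-service
# 'haproxy' dicts seen in the loop items above are filtered before templating.
# haproxy_entries_to_template() and the include_external flag are assumptions for
# this example only.
def haproxy_entries_to_template(service, include_external=True):
    """Yield (listener_name, listener) pairs that would get an HAProxy section."""
    for name, listener in service.get("haproxy", {}).items():
        enabled = str(listener.get("enabled", "no")).lower() in ("yes", "true")
        if not enabled:
            continue  # disabled listeners appear as "skipping" in the log
        if listener.get("external") and not include_external:
            continue  # external variants only apply when an external endpoint is used
        yield name, listener


# Input shaped like the logged item for skyline-apiserver.
skyline_apiserver = {
    "haproxy": {
        "skyline_apiserver": {
            "enabled": "yes", "mode": "http", "external": False, "port": "9998",
        },
        "skyline_apiserver_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz", "port": "9998",
        },
    }
}

for name, listener in haproxy_entries_to_template(skyline_apiserver):
    print(f"{name}: mode={listener['mode']} port={listener['port']}")
```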
Tuesday 01 April 2025 19:46:41 +0000 (0:00:01.125) 0:07:36.872 ********* 2025-04-01 19:47:15.952984 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.952991 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953000 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953006 | orchestrator | 2025-04-01 19:47:15.953013 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-04-01 19:47:15.953019 | orchestrator | Tuesday 01 April 2025 19:46:42 +0000 (0:00:00.971) 0:07:37.844 ********* 2025-04-01 19:47:15.953025 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953032 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953038 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953048 | orchestrator | 2025-04-01 19:47:15.953055 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-04-01 19:47:15.953061 | orchestrator | Tuesday 01 April 2025 19:46:42 +0000 (0:00:00.384) 0:07:38.228 ********* 2025-04-01 19:47:15.953068 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953074 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953080 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953087 | orchestrator | 2025-04-01 19:47:15.953093 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-04-01 19:47:15.953099 | orchestrator | Tuesday 01 April 2025 19:46:43 +0000 (0:00:01.304) 0:07:39.533 ********* 2025-04-01 19:47:15.953105 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953111 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953118 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953124 | orchestrator | 2025-04-01 19:47:15.953130 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-04-01 19:47:15.953136 | orchestrator | Tuesday 01 April 2025 19:46:44 +0000 (0:00:01.244) 0:07:40.778 ********* 2025-04-01 19:47:15.953142 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953148 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953155 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953161 | orchestrator | 2025-04-01 19:47:15.953167 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-04-01 19:47:15.953173 | orchestrator | Tuesday 01 April 2025 19:46:46 +0000 (0:00:01.118) 0:07:41.896 ********* 2025-04-01 19:47:15.953179 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.953185 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.953192 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.953198 | orchestrator | 2025-04-01 19:47:15.953204 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-04-01 19:47:15.953210 | orchestrator | Tuesday 01 April 2025 19:46:51 +0000 (0:00:05.802) 0:07:47.699 ********* 2025-04-01 19:47:15.953216 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953223 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953229 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953235 | orchestrator | 2025-04-01 19:47:15.953241 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-04-01 19:47:15.953247 | orchestrator | Tuesday 01 April 2025 19:46:54 +0000 (0:00:02.188) 0:07:49.887 ********* 2025-04-01 19:47:15.953253 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.953260 | 
orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.953266 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.953272 | orchestrator | 2025-04-01 19:47:15.953278 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-04-01 19:47:15.953285 | orchestrator | Tuesday 01 April 2025 19:47:00 +0000 (0:00:06.605) 0:07:56.493 ********* 2025-04-01 19:47:15.953291 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953297 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953303 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953310 | orchestrator | 2025-04-01 19:47:15.953316 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-04-01 19:47:15.953325 | orchestrator | Tuesday 01 April 2025 19:47:03 +0000 (0:00:02.789) 0:07:59.282 ********* 2025-04-01 19:47:15.953331 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:47:15.953337 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:47:15.953344 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:47:15.953350 | orchestrator | 2025-04-01 19:47:15.953358 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-04-01 19:47:15.953368 | orchestrator | Tuesday 01 April 2025 19:47:08 +0000 (0:00:05.154) 0:08:04.436 ********* 2025-04-01 19:47:15.953374 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953381 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953387 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.953393 | orchestrator | 2025-04-01 19:47:15.953399 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-04-01 19:47:15.953405 | orchestrator | Tuesday 01 April 2025 19:47:09 +0000 (0:00:00.724) 0:08:05.161 ********* 2025-04-01 19:47:15.953412 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953418 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953424 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.953430 | orchestrator | 2025-04-01 19:47:15.953437 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-04-01 19:47:15.953443 | orchestrator | Tuesday 01 April 2025 19:47:10 +0000 (0:00:00.645) 0:08:05.806 ********* 2025-04-01 19:47:15.953449 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953455 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953461 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.953467 | orchestrator | 2025-04-01 19:47:15.953474 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-01 19:47:15.953480 | orchestrator | Tuesday 01 April 2025 19:47:10 +0000 (0:00:00.384) 0:08:06.191 ********* 2025-04-01 19:47:15.953486 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953492 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953498 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.953505 | orchestrator | 2025-04-01 19:47:15.953511 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-01 19:47:15.953517 | orchestrator | Tuesday 01 April 2025 19:47:11 +0000 (0:00:00.706) 0:08:06.898 ********* 2025-04-01 19:47:15.953523 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953529 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953535 | orchestrator 
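The RUNNING HANDLER block around this point performs the loadbalancer rolling restart: the backup keepalived, haproxy, and proxysql containers are stopped and started first (the `changed` results above), the master-side stop/start handlers are skipped because their conditions do not apply on this run, and the play then waits until haproxy and proxysql actually listen on the VIP. A standalone sketch of such a listen check follows; it is only an illustration (the real tasks run inside Ansible, similar in spirit to the two "Wait for ... to listen on VIP" handlers below), and the address and port used are placeholders.

```python
# Standalone sketch of a "wait until the service listens on the VIP" check. This is not
# kolla-ansible code, and the address/port below are placeholders.
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False


if __name__ == "__main__":
    # Placeholder VIP address and port; substitute the real internal VIP and listener.
    print(wait_for_listen("192.0.2.10", 443, timeout=10))
```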
| skipping: [testbed-node-2] 2025-04-01 19:47:15.953542 | orchestrator | 2025-04-01 19:47:15.953548 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-01 19:47:15.953554 | orchestrator | Tuesday 01 April 2025 19:47:11 +0000 (0:00:00.644) 0:08:07.542 ********* 2025-04-01 19:47:15.953560 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:47:15.953567 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:47:15.953573 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:47:15.953590 | orchestrator | 2025-04-01 19:47:15.953597 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-01 19:47:15.953603 | orchestrator | Tuesday 01 April 2025 19:47:12 +0000 (0:00:00.403) 0:08:07.946 ********* 2025-04-01 19:47:15.953609 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953616 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953622 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953628 | orchestrator | 2025-04-01 19:47:15.953634 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-01 19:47:15.953640 | orchestrator | Tuesday 01 April 2025 19:47:13 +0000 (0:00:01.467) 0:08:09.414 ********* 2025-04-01 19:47:15.953647 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:47:15.953653 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:47:15.953659 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:47:15.953665 | orchestrator | 2025-04-01 19:47:15.953672 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:47:15.953678 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-01 19:47:15.953685 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-01 19:47:15.953691 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-01 19:47:15.953701 | orchestrator | 2025-04-01 19:47:15.953708 | orchestrator | 2025-04-01 19:47:15.953714 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:47:15.953720 | orchestrator | Tuesday 01 April 2025 19:47:14 +0000 (0:00:01.248) 0:08:10.662 ********* 2025-04-01 19:47:15.953726 | orchestrator | =============================================================================== 2025-04-01 19:47:15.953733 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 9.07s 2025-04-01 19:47:15.953739 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.85s 2025-04-01 19:47:15.953745 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 7.81s 2025-04-01 19:47:15.953751 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.31s 2025-04-01 19:47:15.953757 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 7.23s 2025-04-01 19:47:15.953764 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.89s 2025-04-01 19:47:15.953770 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.61s 2025-04-01 19:47:15.953776 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.61s 2025-04-01 19:47:15.953782 | orchestrator 
| haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.49s 2025-04-01 19:47:15.953788 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.41s 2025-04-01 19:47:15.953794 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 6.17s 2025-04-01 19:47:15.953804 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.05s 2025-04-01 19:47:15.953810 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.80s 2025-04-01 19:47:15.953816 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.80s 2025-04-01 19:47:15.953825 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.70s 2025-04-01 19:47:18.981109 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.70s 2025-04-01 19:47:18.981229 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.54s 2025-04-01 19:47:18.981249 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 5.29s 2025-04-01 19:47:18.981264 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.21s 2025-04-01 19:47:18.981278 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.15s 2025-04-01 19:47:18.981309 | orchestrator | 2025-04-01 19:47:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:18.983660 | orchestrator | 2025-04-01 19:47:18 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:18.984957 | orchestrator | 2025-04-01 19:47:18 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:18.986630 | orchestrator | 2025-04-01 19:47:18 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:18.986721 | orchestrator | 2025-04-01 19:47:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:22.048108 | orchestrator | 2025-04-01 19:47:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:22.051572 | orchestrator | 2025-04-01 19:47:22 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:22.052190 | orchestrator | 2025-04-01 19:47:22 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:22.053720 | orchestrator | 2025-04-01 19:47:22 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:25.096335 | orchestrator | 2025-04-01 19:47:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:25.096579 | orchestrator | 2025-04-01 19:47:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:25.098303 | orchestrator | 2025-04-01 19:47:25 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:25.098341 | orchestrator | 2025-04-01 19:47:25 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:25.099517 | orchestrator | 2025-04-01 19:47:25 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:28.147781 | orchestrator | 2025-04-01 19:47:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:28.147920 | orchestrator | 2025-04-01 19:47:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state 
STARTED 2025-04-01 19:47:28.150010 | orchestrator | 2025-04-01 19:47:28 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:28.150098 | orchestrator | 2025-04-01 19:47:28 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:28.150695 | orchestrator | 2025-04-01 19:47:28 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:31.185972 | orchestrator | 2025-04-01 19:47:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:31.186108 | orchestrator | 2025-04-01 19:47:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:31.186632 | orchestrator | 2025-04-01 19:47:31 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:31.186667 | orchestrator | 2025-04-01 19:47:31 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:31.187275 | orchestrator | 2025-04-01 19:47:31 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:31.187392 | orchestrator | 2025-04-01 19:47:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:34.223135 | orchestrator | 2025-04-01 19:47:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:34.224218 | orchestrator | 2025-04-01 19:47:34 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:34.224259 | orchestrator | 2025-04-01 19:47:34 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:34.227651 | orchestrator | 2025-04-01 19:47:34 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:37.259797 | orchestrator | 2025-04-01 19:47:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:37.259968 | orchestrator | 2025-04-01 19:47:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:37.260556 | orchestrator | 2025-04-01 19:47:37 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:37.264207 | orchestrator | 2025-04-01 19:47:37 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:37.268853 | orchestrator | 2025-04-01 19:47:37 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:40.312154 | orchestrator | 2025-04-01 19:47:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:40.312290 | orchestrator | 2025-04-01 19:47:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:40.313778 | orchestrator | 2025-04-01 19:47:40 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:40.317578 | orchestrator | 2025-04-01 19:47:40 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:40.320028 | orchestrator | 2025-04-01 19:47:40 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:43.368128 | orchestrator | 2025-04-01 19:47:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:43.368265 | orchestrator | 2025-04-01 19:47:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:43.368627 | orchestrator | 2025-04-01 19:47:43 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:43.369329 | orchestrator | 2025-04-01 19:47:43 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state 
STARTED 2025-04-01 19:47:43.370189 | orchestrator | 2025-04-01 19:47:43 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:46.433036 | orchestrator | 2025-04-01 19:47:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:46.433164 | orchestrator | 2025-04-01 19:47:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:46.437224 | orchestrator | 2025-04-01 19:47:46 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:46.437950 | orchestrator | 2025-04-01 19:47:46 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:46.441107 | orchestrator | 2025-04-01 19:47:46 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:49.490276 | orchestrator | 2025-04-01 19:47:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:49.490414 | orchestrator | 2025-04-01 19:47:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:49.490820 | orchestrator | 2025-04-01 19:47:49 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:49.491746 | orchestrator | 2025-04-01 19:47:49 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:49.492934 | orchestrator | 2025-04-01 19:47:49 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:52.556335 | orchestrator | 2025-04-01 19:47:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:52.556522 | orchestrator | 2025-04-01 19:47:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:52.558357 | orchestrator | 2025-04-01 19:47:52 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:52.561118 | orchestrator | 2025-04-01 19:47:52 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:52.565956 | orchestrator | 2025-04-01 19:47:52 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:52.568762 | orchestrator | 2025-04-01 19:47:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:55.613666 | orchestrator | 2025-04-01 19:47:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:55.616160 | orchestrator | 2025-04-01 19:47:55 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:55.621972 | orchestrator | 2025-04-01 19:47:55 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:55.624062 | orchestrator | 2025-04-01 19:47:55 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:58.669971 | orchestrator | 2025-04-01 19:47:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:47:58.670180 | orchestrator | 2025-04-01 19:47:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:47:58.672378 | orchestrator | 2025-04-01 19:47:58 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:47:58.674615 | orchestrator | 2025-04-01 19:47:58 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:47:58.677251 | orchestrator | 2025-04-01 19:47:58 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:47:58.677828 | orchestrator | 2025-04-01 19:47:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 
19:48:01.741113 | orchestrator | 2025-04-01 19:48:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:01.744844 | orchestrator | 2025-04-01 19:48:01 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:48:01.746760 | orchestrator | 2025-04-01 19:48:01 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:48:01.749148 | orchestrator | 2025-04-01 19:48:01 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:48:04.797869 | orchestrator | 2025-04-01 19:48:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:48:04.798079 | orchestrator | 2025-04-01 19:48:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:04.798915 | orchestrator | 2025-04-01 19:48:04 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:48:04.800255 | orchestrator | 2025-04-01 19:48:04 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:48:04.802183 | orchestrator | 2025-04-01 19:48:04 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:48:07.852385 | orchestrator | 2025-04-01 19:48:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:48:07.852546 | orchestrator | 2025-04-01 19:48:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:07.854057 | orchestrator | 2025-04-01 19:48:07 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:48:07.856250 | orchestrator | 2025-04-01 19:48:07 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:48:07.858122 | orchestrator | 2025-04-01 19:48:07 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:48:10.918218 | orchestrator | 2025-04-01 19:48:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:48:10.918383 | orchestrator | 2025-04-01 19:48:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:10.918741 | orchestrator | 2025-04-01 19:48:10 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:48:10.920750 | orchestrator | 2025-04-01 19:48:10 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:48:10.922712 | orchestrator | 2025-04-01 19:48:10 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:48:10.923188 | orchestrator | 2025-04-01 19:48:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:48:13.985954 | orchestrator | 2025-04-01 19:48:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:17.041172 | orchestrator | 2025-04-01 19:48:13 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:48:17.041305 | orchestrator | 2025-04-01 19:48:13 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED 2025-04-01 19:48:17.041330 | orchestrator | 2025-04-01 19:48:13 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:48:17.041353 | orchestrator | 2025-04-01 19:48:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:48:17.041528 | orchestrator | 2025-04-01 19:48:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:48:17.043276 | orchestrator | 2025-04-01 19:48:17 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 
19:48:17.043310 | orchestrator | 2025-04-01 19:48:17 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state STARTED
2025-04-01 19:48:17.044143 | orchestrator | 2025-04-01 19:48:17 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED
2025-04-01 19:48:20.098287 | orchestrator | 2025-04-01 19:48:17 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:48:20.098455 | orchestrator | 2025-04-01 19:48:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:48:20.106085 | orchestrator | 2025-04-01 19:48:20 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED
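The repeated entries above come from the deploy wrapper polling the OSISM manager for the state of the queued tasks roughly once per second until each one reaches SUCCESS or FAILURE. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper that returns state strings such as "STARTED" or "SUCCESS" (the actual osism client calls are not shown in this log):

    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)-5s | %(message)s", level=logging.INFO)

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until every task has left the STARTED/PENDING states."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED", "SUCCESS", "FAILURE"
                logging.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", interval)
                time.sleep(interval)

The caller passes the task IDs seen in the log; the loop terminates once all of them have reported a final state, which matches the point below where the playbook output is printed.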
orchestrator | 2025-04-01 19:49:05 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:08.981328 | orchestrator | 2025-04-01 19:49:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:08.981454 | orchestrator | 2025-04-01 19:49:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:08.983894 | orchestrator | 2025-04-01 19:49:08 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:08.984818 | orchestrator | 2025-04-01 19:49:08 | INFO  | Task 6fc07cac-d8cf-4be0-88b8-cba37a8c0012 is in state SUCCESS 2025-04-01 19:49:08.986876 | orchestrator | 2025-04-01 19:49:08.986914 | orchestrator | 2025-04-01 19:49:08.987074 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:49:08.987091 | orchestrator | 2025-04-01 19:49:08.987105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:49:08.987118 | orchestrator | Tuesday 01 April 2025 19:47:19 +0000 (0:00:00.344) 0:00:00.344 ********* 2025-04-01 19:49:08.987131 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:49:08.987147 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:49:08.987169 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:49:08.987182 | orchestrator | 2025-04-01 19:49:08.987196 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:49:08.987209 | orchestrator | Tuesday 01 April 2025 19:47:19 +0000 (0:00:00.436) 0:00:00.781 ********* 2025-04-01 19:49:08.987223 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-04-01 19:49:08.987237 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-04-01 19:49:08.987250 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-04-01 19:49:08.987263 | orchestrator | 2025-04-01 19:49:08.987276 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-04-01 19:49:08.987289 | orchestrator | 2025-04-01 19:49:08.987301 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-01 19:49:08.987314 | orchestrator | Tuesday 01 April 2025 19:47:20 +0000 (0:00:00.332) 0:00:01.113 ********* 2025-04-01 19:49:08.987327 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:49:08.987340 | orchestrator | 2025-04-01 19:49:08.987353 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-04-01 19:49:08.987366 | orchestrator | Tuesday 01 April 2025 19:47:20 +0000 (0:00:00.808) 0:00:01.922 ********* 2025-04-01 19:49:08.987379 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:49:08.987392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:49:08.987405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-01 19:49:08.987418 | orchestrator | 2025-04-01 19:49:08.987431 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-01 19:49:08.987444 | orchestrator | Tuesday 01 April 2025 19:47:21 +0000 (0:00:00.881) 0:00:02.803 ********* 2025-04-01 19:49:08.987460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987732 | orchestrator | 2025-04-01 19:49:08.987746 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-01 19:49:08.987758 | orchestrator | Tuesday 01 April 2025 19:47:23 +0000 (0:00:01.846) 0:00:04.650 ********* 2025-04-01 19:49:08.987771 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:49:08.987784 | orchestrator | 2025-04-01 19:49:08.987796 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-04-01 19:49:08.987809 | orchestrator | Tuesday 01 April 2025 19:47:24 +0000 (0:00:00.911) 0:00:05.562 ********* 2025-04-01 19:49:08.987833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.987875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.987948 | orchestrator | 2025-04-01 19:49:08.987961 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-04-01 19:49:08.987974 | orchestrator | Tuesday 01 April 2025 19:47:27 +0000 (0:00:03.455) 0:00:09.017 ********* 2025-04-01 19:49:08.987987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988060 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:49:08.988074 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:49:08.988087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988126 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:49:08.988139 | orchestrator | 2025-04-01 19:49:08.988152 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-01 19:49:08.988169 | orchestrator | Tuesday 01 April 2025 19:47:29 +0000 (0:00:01.154) 0:00:10.172 ********* 2025-04-01 19:49:08.988188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988216 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:49:08.988229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988277 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:49:08.988298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-01 19:49:08.988313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-01 19:49:08.988328 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:49:08.988341 | orchestrator | 2025-04-01 19:49:08.988355 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-01 19:49:08.988369 | orchestrator | Tuesday 01 April 2025 19:47:30 +0000 (0:00:01.540) 0:00:11.712 ********* 2025-04-01 19:49:08.988383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.988476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.988497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.988511 | orchestrator | 2025-04-01 19:49:08.988525 | orchestrator | TASK [opensearch : Copying over opensearch service config file] 
**************** 2025-04-01 19:49:08.988540 | orchestrator | Tuesday 01 April 2025 19:47:33 +0000 (0:00:02.840) 0:00:14.553 ********* 2025-04-01 19:49:08.988554 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.988568 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:49:08.988582 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:49:08.988596 | orchestrator | 2025-04-01 19:49:08.988627 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-01 19:49:08.988641 | orchestrator | Tuesday 01 April 2025 19:47:37 +0000 (0:00:04.167) 0:00:18.721 ********* 2025-04-01 19:49:08.988654 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.988667 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:49:08.988679 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:49:08.988691 | orchestrator | 2025-04-01 19:49:08.988704 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-01 19:49:08.988717 | orchestrator | Tuesday 01 April 2025 19:47:39 +0000 (0:00:02.279) 0:00:21.000 ********* 2025-04-01 19:49:08.988737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-01 19:49:08.988792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.988813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 19:49:08.988827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-01 
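Each loop item above is one service definition rendered by the playbook: container name, image, bind mounts, an HTTP healthcheck run as healthcheck_curl against the node's internal API address, and the haproxy settings for port 9200 (OpenSearch) and 5601 (OpenSearch Dashboards). A rough sketch of how such a healthcheck block maps onto container health-check options, purely to illustrate the structure; this is not the kolla_container module itself, and the seconds unit on interval/timeout is an assumption:

    # Illustrative only: translate one of the `opensearch` item dicts above into
    # docker CLI health-check flags. Field names are copied from the log output.
    service = {
        "container_name": "opensearch",
        "image": "registry.osism.tech/kolla/release/opensearch:2.18.0.20241206",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
            "timeout": "30",
        },
    }

    hc = service["healthcheck"]
    flags = [
        f"--health-cmd '{hc['test'][1]}'",
        f"--health-interval {hc['interval']}s",      # unit suffix is an assumption
        f"--health-retries {hc['retries']}",
        f"--health-start-period {hc['start_period']}s",
        f"--health-timeout {hc['timeout']}s",
    ]
    print("docker run", " ".join(flags), service["image"])

The same dictionary shape recurs for every node, differing only in the internal IP used by the healthcheck (192.168.16.10, .11, .12).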
19:49:08.988846 | orchestrator | 2025-04-01 19:49:08.988859 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-01 19:49:08.988872 | orchestrator | Tuesday 01 April 2025 19:47:43 +0000 (0:00:03.493) 0:00:24.493 ********* 2025-04-01 19:49:08.988884 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:49:08.988897 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:49:08.988909 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:49:08.988922 | orchestrator | 2025-04-01 19:49:08.988935 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-01 19:49:08.988948 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.579) 0:00:25.072 ********* 2025-04-01 19:49:08.988960 | orchestrator | 2025-04-01 19:49:08.988973 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-01 19:49:08.988985 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.451) 0:00:25.524 ********* 2025-04-01 19:49:08.988998 | orchestrator | 2025-04-01 19:49:08.989011 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-01 19:49:08.989023 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.120) 0:00:25.644 ********* 2025-04-01 19:49:08.989036 | orchestrator | 2025-04-01 19:49:08.989048 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-01 19:49:08.989061 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.193) 0:00:25.838 ********* 2025-04-01 19:49:08.989074 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:49:08.989086 | orchestrator | 2025-04-01 19:49:08.989099 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-01 19:49:08.989114 | orchestrator | Tuesday 01 April 2025 19:47:45 +0000 (0:00:00.322) 0:00:26.161 ********* 2025-04-01 19:49:08.989126 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:49:08.989139 | orchestrator | 2025-04-01 19:49:08.989153 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-01 19:49:08.989166 | orchestrator | Tuesday 01 April 2025 19:47:46 +0000 (0:00:00.894) 0:00:27.056 ********* 2025-04-01 19:49:08.989178 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.989191 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:49:08.989203 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:49:08.989216 | orchestrator | 2025-04-01 19:49:08.989229 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-04-01 19:49:08.989241 | orchestrator | Tuesday 01 April 2025 19:48:14 +0000 (0:00:28.117) 0:00:55.174 ********* 2025-04-01 19:49:08.989254 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.989266 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:49:08.989279 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:49:08.989292 | orchestrator | 2025-04-01 19:49:08.989304 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-01 19:49:08.989316 | orchestrator | Tuesday 01 April 2025 19:48:53 +0000 (0:00:39.691) 0:01:34.866 ********* 2025-04-01 19:49:08.989329 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:49:08.989341 | orchestrator | 2025-04-01 
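The handlers above ("Disable shard allocation", "Perform a flush", "Restart opensearch container") follow the usual safe-restart procedure for an OpenSearch cluster; on this run the first two were skipped and only the restarts executed. For reference, a minimal sketch of that generic pattern against the OpenSearch REST API, assuming the internal endpoint seen in the healthchecks; this is the standard rolling-restart procedure, not the playbook's exact tasks:

    # Sketch of "disable shard allocation, flush, restart, re-enable" using the
    # OpenSearch REST API directly. Base URL taken from the healthcheck in the log.
    import requests

    BASE = "http://192.168.16.10:9200"

    def prepare_for_restart():
        # Only allocate primary shards while nodes are being restarted.
        requests.put(
            f"{BASE}/_cluster/settings",
            json={"persistent": {"cluster.routing.allocation.enable": "primaries"}},
            timeout=30,
        ).raise_for_status()
        # Flush so restarted nodes recover from segments instead of the translog.
        requests.post(f"{BASE}/_flush", timeout=30).raise_for_status()

    def after_restart():
        # Reset the setting (JSON null) to re-enable full shard allocation.
        requests.put(
            f"{BASE}/_cluster/settings",
            json={"persistent": {"cluster.routing.allocation.enable": None}},
            timeout=30,
        ).raise_for_status()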
19:49:08.989354 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-04-01 19:49:08.989366 | orchestrator | Tuesday 01 April 2025 19:48:54 +0000 (0:00:01.017) 0:01:35.883 ********* 2025-04-01 19:49:08.989379 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:49:08.989391 | orchestrator | 2025-04-01 19:49:08.989404 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-04-01 19:49:08.989422 | orchestrator | Tuesday 01 April 2025 19:48:57 +0000 (0:00:02.959) 0:01:38.842 ********* 2025-04-01 19:49:08.989434 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:49:08.989447 | orchestrator | 2025-04-01 19:49:08.989459 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-04-01 19:49:08.989476 | orchestrator | Tuesday 01 April 2025 19:49:00 +0000 (0:00:02.415) 0:01:41.258 ********* 2025-04-01 19:49:08.989489 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.989502 | orchestrator | 2025-04-01 19:49:08.989514 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-04-01 19:49:08.989527 | orchestrator | Tuesday 01 April 2025 19:49:03 +0000 (0:00:02.898) 0:01:44.157 ********* 2025-04-01 19:49:08.989539 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:49:08.989552 | orchestrator | 2025-04-01 19:49:08.989569 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:49:12.052654 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:49:12.052769 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:49:12.052787 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-01 19:49:12.052802 | orchestrator | 2025-04-01 19:49:12.052818 | orchestrator | 2025-04-01 19:49:12.052832 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:49:12.052848 | orchestrator | Tuesday 01 April 2025 19:49:06 +0000 (0:00:02.908) 0:01:47.065 ********* 2025-04-01 19:49:12.052863 | orchestrator | =============================================================================== 2025-04-01 19:49:12.052877 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 39.69s 2025-04-01 19:49:12.052891 | orchestrator | opensearch : Restart opensearch container ------------------------------ 28.12s 2025-04-01 19:49:12.052905 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.17s 2025-04-01 19:49:12.052919 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.49s 2025-04-01 19:49:12.052934 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.46s 2025-04-01 19:49:12.052948 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.96s 2025-04-01 19:49:12.052962 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.91s 2025-04-01 19:49:12.052976 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.90s 2025-04-01 19:49:12.052991 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.84s 2025-04-01 19:49:12.053005 | orchestrator | opensearch : 
Check if a log retention policy exists --------------------- 2.42s 2025-04-01 19:49:12.053019 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.28s 2025-04-01 19:49:12.053033 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.85s 2025-04-01 19:49:12.053047 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.54s 2025-04-01 19:49:12.053062 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.15s 2025-04-01 19:49:12.053078 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.02s 2025-04-01 19:49:12.053092 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.91s 2025-04-01 19:49:12.053106 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.89s 2025-04-01 19:49:12.053120 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.88s 2025-04-01 19:49:12.053134 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.81s 2025-04-01 19:49:12.053148 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.77s 2025-04-01 19:49:12.053191 | orchestrator | 2025-04-01 19:49:08 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:12.053209 | orchestrator | 2025-04-01 19:49:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:12.053242 | orchestrator | 2025-04-01 19:49:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:12.053759 | orchestrator | 2025-04-01 19:49:12 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:12.056104 | orchestrator | 2025-04-01 19:49:12 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:15.108243 | orchestrator | 2025-04-01 19:49:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:15.108387 | orchestrator | 2025-04-01 19:49:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:15.112724 | orchestrator | 2025-04-01 19:49:15 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:15.114575 | orchestrator | 2025-04-01 19:49:15 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:15.114819 | orchestrator | 2025-04-01 19:49:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:18.155170 | orchestrator | 2025-04-01 19:49:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:18.156794 | orchestrator | 2025-04-01 19:49:18 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:18.158145 | orchestrator | 2025-04-01 19:49:18 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:18.158371 | orchestrator | 2025-04-01 19:49:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:21.198410 | orchestrator | 2025-04-01 19:49:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:21.199382 | orchestrator | 2025-04-01 19:49:21 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:21.200092 | orchestrator | 2025-04-01 19:49:21 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 
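The post-config tasks recorded above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") check for an index lifecycle policy, create one if it is missing, and attach it to the indices that already exist. A hedged sketch of what that sequence typically looks like against the OpenSearch ISM plugin API; the policy name, index pattern and 14-day retention below are placeholders, not the values this playbook uses:

    # Sketch of the check / create / apply sequence for an ISM retention policy.
    import requests

    BASE = "http://192.168.16.10:9200"
    POLICY = "retention"  # placeholder name

    policy_body = {
        "policy": {
            "description": "Delete indices after a fixed retention period",
            "default_state": "hot",
            "states": [
                {"name": "hot", "actions": [], "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "14d"}}]},
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
        }
    }

    # Check whether the policy already exists (404 means it does not).
    resp = requests.get(f"{BASE}/_plugins/_ism/policies/{POLICY}", timeout=30)
    if resp.status_code == 404:
        # Create the policy ...
        requests.put(f"{BASE}/_plugins/_ism/policies/{POLICY}",
                     json=policy_body, timeout=30).raise_for_status()
        # ... and attach it to already existing indices (placeholder pattern).
        requests.post(f"{BASE}/_plugins/_ism/add/log-*",
                      json={"policy_id": POLICY}, timeout=30).raise_for_status()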
19:49:24.243375 | orchestrator | 2025-04-01 19:49:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:24.243519 | orchestrator | 2025-04-01 19:49:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:24.245811 | orchestrator | 2025-04-01 19:49:24 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:27.295678 | orchestrator | 2025-04-01 19:49:24 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:27.295803 | orchestrator | 2025-04-01 19:49:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:27.295839 | orchestrator | 2025-04-01 19:49:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:27.297527 | orchestrator | 2025-04-01 19:49:27 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:30.357253 | orchestrator | 2025-04-01 19:49:27 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:30.357373 | orchestrator | 2025-04-01 19:49:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:30.357411 | orchestrator | 2025-04-01 19:49:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:30.360124 | orchestrator | 2025-04-01 19:49:30 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:30.362973 | orchestrator | 2025-04-01 19:49:30 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:30.363202 | orchestrator | 2025-04-01 19:49:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:33.407650 | orchestrator | 2025-04-01 19:49:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:33.408009 | orchestrator | 2025-04-01 19:49:33 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:33.408971 | orchestrator | 2025-04-01 19:49:33 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:36.460330 | orchestrator | 2025-04-01 19:49:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:36.460470 | orchestrator | 2025-04-01 19:49:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:36.463170 | orchestrator | 2025-04-01 19:49:36 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:36.466682 | orchestrator | 2025-04-01 19:49:36 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:36.466931 | orchestrator | 2025-04-01 19:49:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:39.523009 | orchestrator | 2025-04-01 19:49:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:39.524746 | orchestrator | 2025-04-01 19:49:39 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:39.526775 | orchestrator | 2025-04-01 19:49:39 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:42.592798 | orchestrator | 2025-04-01 19:49:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:42.592960 | orchestrator | 2025-04-01 19:49:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:42.594130 | orchestrator | 2025-04-01 19:49:42 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:42.596118 | orchestrator | 2025-04-01 
19:49:42 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:45.661205 | orchestrator | 2025-04-01 19:49:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:45.661343 | orchestrator | 2025-04-01 19:49:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:45.663045 | orchestrator | 2025-04-01 19:49:45 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:45.664491 | orchestrator | 2025-04-01 19:49:45 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:45.665121 | orchestrator | 2025-04-01 19:49:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:48.739809 | orchestrator | 2025-04-01 19:49:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:48.741863 | orchestrator | 2025-04-01 19:49:48 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:48.742873 | orchestrator | 2025-04-01 19:49:48 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:51.791917 | orchestrator | 2025-04-01 19:49:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:51.792035 | orchestrator | 2025-04-01 19:49:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:51.794801 | orchestrator | 2025-04-01 19:49:51 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:51.796803 | orchestrator | 2025-04-01 19:49:51 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:54.850129 | orchestrator | 2025-04-01 19:49:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:54.850304 | orchestrator | 2025-04-01 19:49:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:54.851128 | orchestrator | 2025-04-01 19:49:54 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:54.852752 | orchestrator | 2025-04-01 19:49:54 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:49:57.898346 | orchestrator | 2025-04-01 19:49:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:49:57.898523 | orchestrator | 2025-04-01 19:49:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:49:57.902336 | orchestrator | 2025-04-01 19:49:57 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:49:57.905004 | orchestrator | 2025-04-01 19:49:57 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:00.944282 | orchestrator | 2025-04-01 19:49:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:00.944448 | orchestrator | 2025-04-01 19:50:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:00.945778 | orchestrator | 2025-04-01 19:50:00 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:50:00.947058 | orchestrator | 2025-04-01 19:50:00 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:03.984072 | orchestrator | 2025-04-01 19:50:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:03.984233 | orchestrator | 2025-04-01 19:50:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:03.984457 | orchestrator | 2025-04-01 19:50:03 | INFO  | Task 
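
The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are the deploy wrapper polling the OSISM task queue once per second until each task finishes. A minimal sketch of that wait-until-done pattern as an Ansible retry loop follows; "show-task-state" is a hypothetical helper command, not part of OSISM, and the 600-retry budget is an arbitrary choice for illustration.

---
# Sketch of the polling pattern seen in the log: re-run a status check
# every second until the task leaves the STARTED state.
# "show-task-state" is a hypothetical helper, not an OSISM command.
- hosts: orchestrator
  gather_facts: false
  vars:
    task_id: aa2524f4-a625-4b6b-adac-0dc9967e8e8d   # example ID taken from the log
  tasks:
    - name: Wait until the task reaches a final state
      ansible.builtin.command: show-task-state {{ task_id }}
      register: task_state
      changed_when: false
      retries: 600                  # give up after roughly ten minutes
      delay: 1                      # "Wait 1 second(s) until the next check"
      until: task_state.stdout in ['SUCCESS', 'FAILURE']
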
84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:50:03.985241 | orchestrator | 2025-04-01 19:50:03 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:07.041522 | orchestrator | 2025-04-01 19:50:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:07.041649 | orchestrator | 2025-04-01 19:50:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:07.044343 | orchestrator | 2025-04-01 19:50:07 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state STARTED 2025-04-01 19:50:07.046102 | orchestrator | 2025-04-01 19:50:07 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:10.094188 | orchestrator | 2025-04-01 19:50:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:10.094318 | orchestrator | 2025-04-01 19:50:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:10.098441 | orchestrator | 2025-04-01 19:50:10 | INFO  | Task 84e4b19a-049a-43ef-8cfa-b09bc7a39fbb is in state SUCCESS 2025-04-01 19:50:10.099558 | orchestrator | 2025-04-01 19:50:10.099595 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:50:10.099610 | orchestrator | 2025-04-01 19:50:10.099671 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-04-01 19:50:10.099688 | orchestrator | 2025-04-01 19:50:10.099702 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-01 19:50:10.099717 | orchestrator | Tuesday 01 April 2025 19:36:14 +0000 (0:00:02.449) 0:00:02.449 ********* 2025-04-01 19:50:10.099733 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.099749 | orchestrator | 2025-04-01 19:50:10.099792 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-01 19:50:10.099895 | orchestrator | Tuesday 01 April 2025 19:36:16 +0000 (0:00:01.699) 0:00:04.148 ********* 2025-04-01 19:50:10.099917 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.099932 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-01 19:50:10.100040 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-01 19:50:10.100056 | orchestrator | 2025-04-01 19:50:10.100070 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-01 19:50:10.100084 | orchestrator | Tuesday 01 April 2025 19:36:17 +0000 (0:00:00.932) 0:00:05.081 ********* 2025-04-01 19:50:10.100100 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.100114 | orchestrator | 2025-04-01 19:50:10.100128 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-01 19:50:10.100142 | orchestrator | Tuesday 01 April 2025 19:36:19 +0000 (0:00:01.695) 0:00:06.776 ********* 2025-04-01 19:50:10.100917 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.100946 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.100961 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.100975 | orchestrator | ok: [testbed-node-2] 2025-04-01 
19:50:10.100989 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.101032 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.101047 | orchestrator | 2025-04-01 19:50:10.101351 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-01 19:50:10.101866 | orchestrator | Tuesday 01 April 2025 19:36:21 +0000 (0:00:02.024) 0:00:08.801 ********* 2025-04-01 19:50:10.101890 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.101960 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.101976 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.101990 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.102004 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.102063 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.102080 | orchestrator | 2025-04-01 19:50:10.102095 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-01 19:50:10.102109 | orchestrator | Tuesday 01 April 2025 19:36:22 +0000 (0:00:01.285) 0:00:10.086 ********* 2025-04-01 19:50:10.102123 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.102137 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.102151 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.102165 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.102178 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.102192 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.102206 | orchestrator | 2025-04-01 19:50:10.102220 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-01 19:50:10.102234 | orchestrator | Tuesday 01 April 2025 19:36:24 +0000 (0:00:01.645) 0:00:11.732 ********* 2025-04-01 19:50:10.102248 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.102263 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.102277 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.102290 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.102304 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.102329 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.102343 | orchestrator | 2025-04-01 19:50:10.102357 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-01 19:50:10.102372 | orchestrator | Tuesday 01 April 2025 19:36:25 +0000 (0:00:01.203) 0:00:12.935 ********* 2025-04-01 19:50:10.102386 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.102400 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.102414 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.102430 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.102446 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.102461 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.102476 | orchestrator | 2025-04-01 19:50:10.102492 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-01 19:50:10.102522 | orchestrator | Tuesday 01 April 2025 19:36:26 +0000 (0:00:01.020) 0:00:13.956 ********* 2025-04-01 19:50:10.102537 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.102553 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.102569 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.102585 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.103208 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.103241 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.103256 | orchestrator | 2025-04-01 
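
The "check if podman binary is present" and "set_fact container_binary" tasks above boil down to a simple preference: use podman when it is installed, otherwise docker. A sketch of that pair is below; it mirrors the logged task names, not the exact ceph-facts source, and the /usr/bin/podman path is an assumption.

---
# Sketch of the container runtime detection seen above: prefer podman
# when it exists, otherwise fall back to docker.
- hosts: all
  gather_facts: false
  tasks:
    - name: Check if podman binary is present
      ansible.builtin.stat:
        path: /usr/bin/podman          # assumed install path
      register: podman_binary

    - name: Set_fact container_binary
      ansible.builtin.set_fact:
        container_binary: "{{ 'podman' if podman_binary.stat.exists else 'docker' }}"
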
19:50:10.103270 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-01 19:50:10.103285 | orchestrator | Tuesday 01 April 2025 19:36:27 +0000 (0:00:01.076) 0:00:15.033 ********* 2025-04-01 19:50:10.103300 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.103387 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.103791 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.103810 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.103860 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.103875 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.105216 | orchestrator | 2025-04-01 19:50:10.105235 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-01 19:50:10.105250 | orchestrator | Tuesday 01 April 2025 19:36:28 +0000 (0:00:01.461) 0:00:16.494 ********* 2025-04-01 19:50:10.105264 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.105278 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.105292 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.105306 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.105320 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.105334 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.105348 | orchestrator | 2025-04-01 19:50:10.105455 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-01 19:50:10.105476 | orchestrator | Tuesday 01 April 2025 19:36:30 +0000 (0:00:01.157) 0:00:17.652 ********* 2025-04-01 19:50:10.105491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.105506 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.105520 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.105534 | orchestrator | 2025-04-01 19:50:10.105548 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-01 19:50:10.105562 | orchestrator | Tuesday 01 April 2025 19:36:31 +0000 (0:00:01.014) 0:00:18.666 ********* 2025-04-01 19:50:10.105576 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.105590 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.105604 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.105618 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.105653 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.105667 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.105681 | orchestrator | 2025-04-01 19:50:10.105695 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-01 19:50:10.105710 | orchestrator | Tuesday 01 April 2025 19:36:33 +0000 (0:00:02.441) 0:00:21.108 ********* 2025-04-01 19:50:10.105724 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.105738 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.105752 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.105766 | orchestrator | 2025-04-01 19:50:10.105780 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-01 19:50:10.105794 | orchestrator | Tuesday 01 April 2025 19:36:37 +0000 (0:00:03.565) 0:00:24.673 
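
The "find a running mon container" task above reports changed for each monitor because it runs a container lookup on every mon host. The docker command in the sketch below matches the one visible in the skipped results further down ("docker ps -q --filter name=ceph-mon-<host>"); the "mons" group name and the assumption that inventory names match the container suffix are illustrative, not taken from this log.

---
# Sketch of "find a running mon container": ask each monitor host
# whether a ceph-mon container is currently up.
- hosts: mons[0]
  gather_facts: false
  tasks:
    - name: Find a running mon container
      ansible.builtin.command: >-
        {{ container_binary | default('docker') }} ps -q
        --filter name=ceph-mon-{{ item }}
      register: ceph_mon_container
      changed_when: false
      failed_when: false
      delegate_to: "{{ item }}"
      loop: "{{ groups['mons'] }}"    # assumes inventory group 'mons'
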
********* 2025-04-01 19:50:10.105808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.105823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.105836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.105864 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.105879 | orchestrator | 2025-04-01 19:50:10.105893 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-01 19:50:10.105914 | orchestrator | Tuesday 01 April 2025 19:36:38 +0000 (0:00:01.566) 0:00:26.239 ********* 2025-04-01 19:50:10.105929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.105946 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.105961 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.105975 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.105989 | orchestrator | 2025-04-01 19:50:10.106003 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-01 19:50:10.106069 | orchestrator | Tuesday 01 April 2025 19:36:40 +0000 (0:00:01.827) 0:00:28.067 ********* 2025-04-01 19:50:10.106092 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106109 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106126 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106142 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.106158 | orchestrator | 2025-04-01 19:50:10.106174 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-01 19:50:10.106277 | orchestrator | Tuesday 01 April 2025 19:36:40 +0000 (0:00:00.447) 
0:00:28.515 ********* 2025-04-01 19:50:10.106302 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-01 19:36:34.512106', 'end': '2025-04-01 19:36:34.695220', 'delta': '0:00:00.183114', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106323 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-01 19:36:35.682596', 'end': '2025-04-01 19:36:35.879496', 'delta': '0:00:00.196900', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106349 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-01 19:36:36.571478', 'end': '2025-04-01 19:36:36.809689', 'delta': '0:00:00.238211', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-01 19:50:10.106365 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.106380 | orchestrator | 2025-04-01 19:50:10.106396 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-01 19:50:10.106411 | orchestrator | Tuesday 01 April 2025 19:36:41 +0000 (0:00:00.609) 0:00:29.124 ********* 2025-04-01 19:50:10.106427 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.106441 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.106455 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.106469 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.106483 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.106497 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.106511 | orchestrator | 2025-04-01 19:50:10.106525 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-01 19:50:10.106539 | orchestrator | Tuesday 01 April 2025 19:36:45 +0000 (0:00:03.513) 0:00:32.638 ********* 2025-04-01 19:50:10.106553 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.106566 | orchestrator | 2025-04-01 19:50:10.106580 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-01 19:50:10.106594 | orchestrator 
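
With a running mon identified, the role builds an exec prefix ("set_fact _container_exec_cmd") and uses it to read the existing cluster fsid ("get current fsid if cluster is already running", which returned ok on testbed-node-0 here). The log does not show the fact values, so the "docker exec ceph-mon-<hostname>" form and the "ceph --cluster ceph fsid" call below are assumptions consistent with the task names, not the role's verbatim code.

---
# Sketch of deriving an exec prefix and the current fsid from a
# running mon container.
- hosts: mons[0]
  gather_facts: true
  tasks:
    - name: Set_fact container_exec_cmd
      ansible.builtin.set_fact:
        container_exec_cmd: >-
          {{ container_binary | default('docker') }} exec
          ceph-mon-{{ ansible_facts['hostname'] }}

    - name: Get current fsid if cluster is already running
      ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph fsid"
      register: current_fsid
      changed_when: false
      failed_when: false

    - name: Set_fact fsid from current_fsid
      ansible.builtin.set_fact:
        fsid: "{{ current_fsid.stdout }}"
      when: current_fsid.rc == 0
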
| Tuesday 01 April 2025 19:36:45 +0000 (0:00:00.750) 0:00:33.389 ********* 2025-04-01 19:50:10.106608 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.106781 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107006 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107032 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107046 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.107059 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.107072 | orchestrator | 2025-04-01 19:50:10.107087 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-01 19:50:10.107100 | orchestrator | Tuesday 01 April 2025 19:36:47 +0000 (0:00:01.318) 0:00:34.707 ********* 2025-04-01 19:50:10.107112 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107126 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107139 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107168 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107181 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.107193 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.107205 | orchestrator | 2025-04-01 19:50:10.107218 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:50:10.107230 | orchestrator | Tuesday 01 April 2025 19:36:48 +0000 (0:00:01.560) 0:00:36.268 ********* 2025-04-01 19:50:10.107243 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107255 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107267 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107280 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107292 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.107324 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.107337 | orchestrator | 2025-04-01 19:50:10.107350 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-01 19:50:10.107363 | orchestrator | Tuesday 01 April 2025 19:36:49 +0000 (0:00:01.241) 0:00:37.510 ********* 2025-04-01 19:50:10.107581 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107606 | orchestrator | 2025-04-01 19:50:10.107620 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-01 19:50:10.107661 | orchestrator | Tuesday 01 April 2025 19:36:50 +0000 (0:00:00.173) 0:00:37.684 ********* 2025-04-01 19:50:10.107674 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107686 | orchestrator | 2025-04-01 19:50:10.107699 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:50:10.107711 | orchestrator | Tuesday 01 April 2025 19:36:50 +0000 (0:00:00.273) 0:00:37.957 ********* 2025-04-01 19:50:10.107724 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107736 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107749 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107761 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107774 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.107786 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.107799 | orchestrator | 2025-04-01 19:50:10.107811 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-01 19:50:10.107824 | 
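
The "generate cluster fsid" task was skipped in this run because an existing cluster already answered the fsid query. The fallback it represents only fires on a fresh deployment; the sketch below shows one way to express it, with the to_uuid-based generator being an illustrative choice rather than the role's actual implementation.

---
# Sketch of the "generate cluster fsid" fallback (skipped above):
# only produce a new fsid when no running cluster reported one.
- hosts: mons[0]
  gather_facts: true
  tasks:
    - name: Generate cluster fsid
      ansible.builtin.set_fact:
        fsid: "{{ (ansible_facts['hostname'] ~ ansible_facts['date_time']['iso8601_micro']) | to_uuid }}"
      when: current_fsid is not defined or current_fsid.rc != 0
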
orchestrator | Tuesday 01 April 2025 19:36:51 +0000 (0:00:00.870) 0:00:38.828 ********* 2025-04-01 19:50:10.107837 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107849 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107862 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107874 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107887 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.107899 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.107912 | orchestrator | 2025-04-01 19:50:10.107924 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-01 19:50:10.107937 | orchestrator | Tuesday 01 April 2025 19:36:52 +0000 (0:00:01.649) 0:00:40.477 ********* 2025-04-01 19:50:10.107950 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.107962 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.107974 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.107987 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.107999 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.108011 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.108024 | orchestrator | 2025-04-01 19:50:10.108036 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-01 19:50:10.108049 | orchestrator | Tuesday 01 April 2025 19:36:53 +0000 (0:00:01.008) 0:00:41.486 ********* 2025-04-01 19:50:10.108062 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.108074 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.108087 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.108099 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.108111 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.108124 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.108137 | orchestrator | 2025-04-01 19:50:10.108149 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-01 19:50:10.108164 | orchestrator | Tuesday 01 April 2025 19:36:55 +0000 (0:00:01.333) 0:00:42.819 ********* 2025-04-01 19:50:10.108177 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.108191 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.108204 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.108218 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.108231 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.108245 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.108258 | orchestrator | 2025-04-01 19:50:10.108272 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-01 19:50:10.108296 | orchestrator | Tuesday 01 April 2025 19:36:56 +0000 (0:00:01.017) 0:00:43.836 ********* 2025-04-01 19:50:10.108310 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.108324 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.108337 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.108351 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.108364 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.108377 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.108390 | orchestrator | 2025-04-01 19:50:10.108417 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-01 
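
The "resolve device link(s)" / "build devices from resolved symlinks" pairs above (and their dedicated_device and bluestore_wal_device variants) were all skipped, since the testbed passes plain /dev paths. Their purpose is to turn stable by-id symlinks into canonical device nodes; a sketch is below, where "readlink -f" is the obvious reading of the task name, the "osds" group name is assumed, and the example by-id entry is hypothetical.

---
# Sketch of resolving device symlinks into canonical /dev paths and
# rebuilding the devices list from the results.
- hosts: osds
  gather_facts: false
  vars:
    devices:
      - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_example   # hypothetical entry
  tasks:
    - name: Resolve device link(s)
      ansible.builtin.command: readlink -f {{ item }}
      register: resolved_devices
      changed_when: false
      loop: "{{ devices }}"

    - name: Set_fact build devices from resolved symlinks
      ansible.builtin.set_fact:
        devices: "{{ resolved_devices.results | map(attribute='stdout') | list }}"
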
19:50:10.108431 | orchestrator | Tuesday 01 April 2025 19:36:57 +0000 (0:00:01.701) 0:00:45.538 ********* 2025-04-01 19:50:10.108446 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.108459 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.108473 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.108487 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.108501 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.108521 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.108535 | orchestrator | 2025-04-01 19:50:10.108548 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-01 19:50:10.108561 | orchestrator | Tuesday 01 April 2025 19:36:58 +0000 (0:00:01.002) 0:00:46.540 ********* 2025-04-01 19:50:10.108575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-04-01 19:50:10.108794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.108970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part1', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part14', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part15', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part16', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.108995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab8dfbad-f338-4768-a4e7-f4b333b69279', 'scsi-SQEMU_QEMU_HARDDISK_ab8dfbad-f338-4768-a4e7-f4b333b69279'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124', 'scsi-SQEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5ebd160-46c7-4645-bffc-e57cafdc3124-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9b8ece6-9486-4a7c-9bf5-40c217f02d2d', 'scsi-SQEMU_QEMU_HARDDISK_a9b8ece6-9486-4a7c-9bf5-40c217f02d2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_146c5a64-e236-4e9d-aba9-c694e16f981b', 'scsi-SQEMU_QEMU_HARDDISK_146c5a64-e236-4e9d-aba9-c694e16f981b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75999ceb-501f-420c-8b43-800350cfb103', 'scsi-SQEMU_QEMU_HARDDISK_75999ceb-501f-420c-8b43-800350cfb103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8c239a65-4acd-4227-a388-0863223ee363', 'scsi-SQEMU_QEMU_HARDDISK_8c239a65-4acd-4227-a388-0863223ee363'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9ce3d96a-7a14-4bc8-9f00-60b125950ef0', 'scsi-SQEMU_QEMU_HARDDISK_9ce3d96a-7a14-4bc8-9f00-60b125950ef0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109459 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.109473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319', 'scsi-SQEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part1', 'scsi-SQEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part14', 'scsi-SQEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part15', 'scsi-SQEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part16', 'scsi-SQEMU_QEMU_HARDDISK_32d92a12-18a7-4405-92c1-c5a976ec5319-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe75b96a-3751-4707-9d8f-14bf0ebec7cf', 'scsi-SQEMU_QEMU_HARDDISK_fe75b96a-3751-4707-9d8f-14bf0ebec7cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9c75ca34-724f-40ca-ac18-00bb9ef52260', 'scsi-SQEMU_QEMU_HARDDISK_9c75ca34-724f-40ca-ac18-00bb9ef52260'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4e136df-1ed3-4293-9f31-166cbf2340f4', 'scsi-SQEMU_QEMU_HARDDISK_d4e136df-1ed3-4293-9f31-166cbf2340f4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109912 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.109926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-52-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.109939 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.109952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdd573d7--384a--5f49--8a42--9b210b6d8834-osd--block--bdd573d7--384a--5f49--8a42--9b210b6d8834', 'dm-uuid-LVM-0HLscGhWI3BE3z58va0GpTBTPtoWQT6fdFHxJHx3khHHsXbjxB45Uwc2cTbz8X74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988d16a2--b35c--5840--9d7c--a8265d6d87f9-osd--block--988d16a2--b35c--5840--9d7c--a8265d6d87f9', 'dm-uuid-LVM-eDTev9OoC6b2zQ9jQohOhBnledSwj0ogaUvEhqgmKpBVoPinnkxjkzB5dSC7OO03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.109978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52229b2b--1fb5--50ba--ad18--deadbd92af76-osd--block--52229b2b--1fb5--50ba--ad18--deadbd92af76', 'dm-uuid-LVM-ZeWxycIrl6OP9tRFrVmx3b5VdT7GwK6d1hscJ5sC2ehnymNJfBMNwGuLf1L9gQRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9675d24--a7d4--5c32--a36a--48aa524d4563-osd--block--b9675d24--a7d4--5c32--a36a--48aa524d4563', 'dm-uuid-LVM-UiMcCVQwJh1DUyUT83GdqDyaEOZCtj3YIGFAhIIKH3ULf6b5KAFZNugOABiaJArg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.110571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.110586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bdd573d7--384a--5f49--8a42--9b210b6d8834-osd--block--bdd573d7--384a--5f49--8a42--9b210b6d8834'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-joOaWb-DLqj-TzGF-jHyS-iq9J-Ab0E-X8GYc8', 'scsi-0QEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1', 'scsi-SQEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.110602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--988d16a2--b35c--5840--9d7c--a8265d6d87f9-osd--block--988d16a2--b35c--5840--9d7c--a8265d6d87f9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eq9Cfd-RShp-tSW2-6REe-F5fP-szKw-3dyL23', 'scsi-0QEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72', 'scsi-SQEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.110615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03', 'scsi-SQEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--52229b2b--1fb5--50ba--ad18--deadbd92af76-osd--block--52229b2b--1fb5--50ba--ad18--deadbd92af76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WI5n3P-kwt6-sZBw-bMZg-KnjK-Px49-iD8yT6', 'scsi-0QEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c', 'scsi-SQEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b9675d24--a7d4--5c32--a36a--48aa524d4563-osd--block--b9675d24--a7d4--5c32--a36a--48aa524d4563'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0fq0AM-H2Ux-0173-mqqu-6LKu-Bsgu-eiVs3w', 'scsi-0QEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c', 'scsi-SQEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111439 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.111556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905', 'scsi-SQEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--959a80fb--1de6--50df--b35c--a247ba0dd9c7-osd--block--959a80fb--1de6--50df--b35c--a247ba0dd9c7', 'dm-uuid-LVM-V3SHFiLLYnCvanXpqDvqxOQH9zNG7t2501L1tIO6yDkizNtXxUkh1t3uosHJWRX0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050-osd--block--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050', 'dm-uuid-LVM-9JpExgtlxdPuoWmJNoQ2AZCX55bgBWMtMY2NJ988mICAB3y3WMqAu2EyPho90or4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.111683 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.111698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:50:10.111915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.112018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--959a80fb--1de6--50df--b35c--a247ba0dd9c7-osd--block--959a80fb--1de6--50df--b35c--a247ba0dd9c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sBWKjx-mczp-poSW-IrWk-PI53-Hypr-rwsvAM', 'scsi-0QEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7', 'scsi-SQEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.112079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050-osd--block--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xLZkou-nOM8-FMbI-J1uc-Uq2c-XtnG-NwevIN', 'scsi-0QEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c', 'scsi-SQEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.112096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8', 'scsi-SQEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.112111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:50:10.112126 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.112140 | orchestrator | 2025-04-01 19:50:10.112155 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-01 19:50:10.112171 | orchestrator | Tuesday 01 April 2025 19:37:01 +0000 (0:00:02.875) 0:00:49.415 ********* 2025-04-01 19:50:10.112186 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.112200 | orchestrator | 2025-04-01 19:50:10.112215 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-01 19:50:10.112230 | orchestrator | Tuesday 01 April 2025 19:37:02 +0000 (0:00:00.473) 0:00:49.889 ********* 2025-04-01 19:50:10.112244 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.112258 | orchestrator | 2025-04-01 19:50:10.112272 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-01 19:50:10.112286 | orchestrator | Tuesday 01 April 2025 19:37:02 +0000 (0:00:00.253) 0:00:50.143 ********* 2025-04-01 19:50:10.112310 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.112324 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.112339 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.112353 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.112367 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.112382 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.112395 | orchestrator | 2025-04-01 19:50:10.112424 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-01 19:50:10.112440 | orchestrator | Tuesday 01 April 2025 19:37:04 +0000 
(0:00:01.691) 0:00:51.834 ********* 2025-04-01 19:50:10.112455 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.112470 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.112484 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.112498 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.112512 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.112526 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.112540 | orchestrator | 2025-04-01 19:50:10.112554 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-01 19:50:10.112568 | orchestrator | Tuesday 01 April 2025 19:37:05 +0000 (0:00:01.642) 0:00:53.477 ********* 2025-04-01 19:50:10.112582 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.112596 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.112610 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.112650 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.112668 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.112684 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.112698 | orchestrator | 2025-04-01 19:50:10.112714 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:50:10.112730 | orchestrator | Tuesday 01 April 2025 19:37:06 +0000 (0:00:01.132) 0:00:54.609 ********* 2025-04-01 19:50:10.112746 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.112762 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.112777 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.112793 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.112809 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.112908 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.112930 | orchestrator | 2025-04-01 19:50:10.112946 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:50:10.112962 | orchestrator | Tuesday 01 April 2025 19:37:07 +0000 (0:00:00.848) 0:00:55.458 ********* 2025-04-01 19:50:10.112977 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.112992 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.113006 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.113020 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.113034 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.113048 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.113062 | orchestrator | 2025-04-01 19:50:10.113076 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:50:10.113090 | orchestrator | Tuesday 01 April 2025 19:37:09 +0000 (0:00:01.165) 0:00:56.623 ********* 2025-04-01 19:50:10.113105 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.113118 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.113132 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.113146 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.113161 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.113175 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.113189 | orchestrator | 2025-04-01 19:50:10.113203 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:50:10.113217 | orchestrator | Tuesday 01 April 2025 19:37:09 +0000 (0:00:00.944) 0:00:57.567 ********* 2025-04-01 19:50:10.113231 | 
orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.113245 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.113259 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.113273 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.113296 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.113318 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.113332 | orchestrator | 2025-04-01 19:50:10.113346 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-01 19:50:10.113361 | orchestrator | Tuesday 01 April 2025 19:37:11 +0000 (0:00:01.199) 0:00:58.767 ********* 2025-04-01 19:50:10.113376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.113390 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-01 19:50:10.113405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.113419 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-01 19:50:10.113433 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-01 19:50:10.113447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.113462 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-01 19:50:10.113476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:50:10.113495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:50:10.113509 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-01 19:50:10.113523 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.113537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:50:10.113552 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-01 19:50:10.113565 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.113580 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.113594 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.113608 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:50:10.113642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:50:10.113658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:50:10.113672 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:50:10.113686 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:50:10.113700 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.113714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:50:10.113728 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.113742 | orchestrator | 2025-04-01 19:50:10.113757 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-01 19:50:10.113771 | orchestrator | Tuesday 01 April 2025 19:37:14 +0000 (0:00:03.167) 0:01:01.935 ********* 2025-04-01 19:50:10.113785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.113799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-01 19:50:10.113813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.113827 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-01 
19:50:10.113841 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-01 19:50:10.113855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:50:10.113869 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-01 19:50:10.113883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.113897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-01 19:50:10.113911 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:50:10.113924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:50:10.113938 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.113952 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.113967 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-01 19:50:10.113981 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.113995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:50:10.114087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:50:10.114106 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.114121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:50:10.114135 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:50:10.114236 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.114257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:50:10.114272 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:50:10.114286 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.114300 | orchestrator | 2025-04-01 19:50:10.114315 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-01 19:50:10.114329 | orchestrator | Tuesday 01 April 2025 19:37:18 +0000 (0:00:03.780) 0:01:05.715 ********* 2025-04-01 19:50:10.114343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.114358 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-04-01 19:50:10.114372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-01 19:50:10.114385 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-04-01 19:50:10.114399 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-01 19:50:10.114414 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-04-01 19:50:10.114428 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-01 19:50:10.114442 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-01 19:50:10.114456 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-01 19:50:10.114470 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-04-01 19:50:10.114484 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-01 19:50:10.114498 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-04-01 19:50:10.114512 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-01 19:50:10.114526 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-04-01 19:50:10.114540 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-01 19:50:10.114554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-01 19:50:10.114568 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2025-04-01 19:50:10.114582 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-01 19:50:10.114596 | orchestrator | 2025-04-01 19:50:10.114610 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-01 19:50:10.114678 | orchestrator | Tuesday 01 April 2025 19:37:27 +0000 (0:00:08.995) 0:01:14.711 ********* 2025-04-01 19:50:10.114695 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.114709 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.114723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.114737 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.114751 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-01 19:50:10.114765 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-01 19:50:10.114779 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-01 19:50:10.114793 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.114807 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-01 19:50:10.114821 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-01 19:50:10.114856 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-01 19:50:10.114879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:50:10.114897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:50:10.114912 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.114928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:50:10.114943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:50:10.114968 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:50:10.114984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:50:10.114999 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.115015 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.115030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:50:10.115046 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:50:10.115061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:50:10.115076 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.115092 | orchestrator | 2025-04-01 19:50:10.115108 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-01 19:50:10.115124 | orchestrator | Tuesday 01 April 2025 19:37:29 +0000 (0:00:02.514) 0:01:17.225 ********* 2025-04-01 19:50:10.115139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.115160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.115176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.115192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-01 19:50:10.115207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-01 19:50:10.115221 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-01 19:50:10.115235 | orchestrator | skipping: [testbed-node-0] 2025-04-01 
19:50:10.115250 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-01 19:50:10.115264 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.115278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-01 19:50:10.115291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-01 19:50:10.115305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:50:10.115319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:50:10.115333 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.115348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:50:10.115362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:50:10.115463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:50:10.115485 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.115500 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:50:10.115515 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.115530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:50:10.115544 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:50:10.115559 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:50:10.115573 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.115588 | orchestrator | 2025-04-01 19:50:10.115603 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-01 19:50:10.115618 | orchestrator | Tuesday 01 April 2025 19:37:30 +0000 (0:00:01.296) 0:01:18.522 ********* 2025-04-01 19:50:10.115655 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-01 19:50:10.115671 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:50:10.115686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:50:10.115701 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:50:10.115715 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-04-01 19:50:10.115729 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:50:10.115743 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:50:10.115766 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:50:10.115781 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-04-01 19:50:10.115795 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:50:10.115809 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:50:10.115823 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:50:10.115838 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.115852 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:50:10.115867 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:50:10.115881 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:50:10.115908 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.115923 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:50:10.115938 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:50:10.115952 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:50:10.115966 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.115980 | orchestrator | 2025-04-01 19:50:10.115995 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-01 19:50:10.116009 | orchestrator | Tuesday 01 April 2025 19:37:32 +0000 (0:00:01.963) 0:01:20.486 ********* 2025-04-01 19:50:10.116023 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.116038 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.116052 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.116066 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.116081 | orchestrator | 2025-04-01 19:50:10.116097 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.116114 | orchestrator | Tuesday 01 April 2025 19:37:34 +0000 (0:00:02.001) 0:01:22.487 ********* 2025-04-01 19:50:10.116129 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116145 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.116160 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.116176 | orchestrator | 2025-04-01 19:50:10.116191 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.116207 | orchestrator | Tuesday 01 April 2025 19:37:35 +0000 (0:00:00.803) 0:01:23.291 ********* 2025-04-01 19:50:10.116222 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116238 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.116253 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.116269 | orchestrator | 2025-04-01 19:50:10.116284 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.116299 | orchestrator | Tuesday 01 April 2025 19:37:37 +0000 (0:00:01.434) 0:01:24.725 ********* 2025-04-01 19:50:10.116315 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116330 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.116345 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.116361 | orchestrator | 2025-04-01 19:50:10.116376 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.116392 | orchestrator | Tuesday 01 April 2025 19:37:37 +0000 (0:00:00.636) 0:01:25.361 ********* 2025-04-01 19:50:10.116407 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.116423 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.116445 | orchestrator | ok: [testbed-node-5] 
2025-04-01 19:50:10.116459 | orchestrator | 2025-04-01 19:50:10.116473 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.116564 | orchestrator | Tuesday 01 April 2025 19:37:38 +0000 (0:00:01.058) 0:01:26.420 ********* 2025-04-01 19:50:10.116584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.116599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.116614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.116652 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116668 | orchestrator | 2025-04-01 19:50:10.116683 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.116697 | orchestrator | Tuesday 01 April 2025 19:37:39 +0000 (0:00:00.785) 0:01:27.206 ********* 2025-04-01 19:50:10.116712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.116726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.116740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.116755 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116769 | orchestrator | 2025-04-01 19:50:10.116784 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.116798 | orchestrator | Tuesday 01 April 2025 19:37:41 +0000 (0:00:01.531) 0:01:28.737 ********* 2025-04-01 19:50:10.116812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.116827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.116841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.116855 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.116877 | orchestrator | 2025-04-01 19:50:10.116892 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.116906 | orchestrator | Tuesday 01 April 2025 19:37:42 +0000 (0:00:00.989) 0:01:29.727 ********* 2025-04-01 19:50:10.116921 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.116936 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.116950 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.116965 | orchestrator | 2025-04-01 19:50:10.116979 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.117011 | orchestrator | Tuesday 01 April 2025 19:37:43 +0000 (0:00:01.440) 0:01:31.167 ********* 2025-04-01 19:50:10.117026 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-01 19:50:10.117040 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-01 19:50:10.117062 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-01 19:50:10.117076 | orchestrator | 2025-04-01 19:50:10.117090 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.117104 | orchestrator | Tuesday 01 April 2025 19:37:44 +0000 (0:00:01.326) 0:01:32.494 ********* 2025-04-01 19:50:10.117118 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117133 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117147 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.117161 | orchestrator | 2025-04-01 19:50:10.117175 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.117190 | orchestrator | Tuesday 01 April 2025 19:37:45 +0000 (0:00:00.845) 0:01:33.340 ********* 2025-04-01 19:50:10.117205 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117221 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117237 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.117252 | orchestrator | 2025-04-01 19:50:10.117268 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.117283 | orchestrator | Tuesday 01 April 2025 19:37:46 +0000 (0:00:00.603) 0:01:33.944 ********* 2025-04-01 19:50:10.117298 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.117314 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117330 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.117354 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117371 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.117386 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.117401 | orchestrator | 2025-04-01 19:50:10.117417 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.117432 | orchestrator | Tuesday 01 April 2025 19:37:47 +0000 (0:00:01.173) 0:01:35.118 ********* 2025-04-01 19:50:10.117448 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.117464 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117480 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.117496 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117511 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.117527 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.117543 | orchestrator | 2025-04-01 19:50:10.117563 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.117577 | orchestrator | Tuesday 01 April 2025 19:37:48 +0000 (0:00:00.764) 0:01:35.882 ********* 2025-04-01 19:50:10.117591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.117606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.117620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.117654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.117668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.117682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.117696 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.117724 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.117831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.117851 | orchestrator | skipping: [testbed-node-5] 2025-04-01 
19:50:10.117866 | orchestrator | 2025-04-01 19:50:10.117880 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-01 19:50:10.117894 | orchestrator | Tuesday 01 April 2025 19:37:49 +0000 (0:00:00.941) 0:01:36.824 ********* 2025-04-01 19:50:10.117908 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.117923 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.117937 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.117951 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.117965 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.117979 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.117993 | orchestrator | 2025-04-01 19:50:10.118007 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-01 19:50:10.118058 | orchestrator | Tuesday 01 April 2025 19:37:50 +0000 (0:00:01.005) 0:01:37.829 ********* 2025-04-01 19:50:10.118075 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.118089 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.118103 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.118117 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-01 19:50:10.118131 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:50:10.118145 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:50:10.118168 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:50:10.118182 | orchestrator | 2025-04-01 19:50:10.118197 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-01 19:50:10.118211 | orchestrator | Tuesday 01 April 2025 19:37:51 +0000 (0:00:01.449) 0:01:39.279 ********* 2025-04-01 19:50:10.118225 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:10.118239 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.118253 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.118267 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-01 19:50:10.118281 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:50:10.118295 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:50:10.118309 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:50:10.118323 | orchestrator | 2025-04-01 19:50:10.118337 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.118351 | orchestrator | Tuesday 01 April 2025 19:37:54 +0000 (0:00:02.925) 0:01:42.204 ********* 2025-04-01 19:50:10.118367 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.118382 | orchestrator | 2025-04-01 19:50:10.118396 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-04-01 19:50:10.118410 | orchestrator | Tuesday 01 April 2025 19:37:56 +0000 (0:00:01.488) 0:01:43.693 ********* 2025-04-01 19:50:10.118424 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.118438 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.118452 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.118466 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.118480 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.118494 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.118509 | orchestrator | 2025-04-01 19:50:10.118523 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.118537 | orchestrator | Tuesday 01 April 2025 19:37:57 +0000 (0:00:00.975) 0:01:44.668 ********* 2025-04-01 19:50:10.118551 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.118564 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.118578 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.118592 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.118606 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.118620 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.118692 | orchestrator | 2025-04-01 19:50:10.118707 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.118722 | orchestrator | Tuesday 01 April 2025 19:37:58 +0000 (0:00:01.813) 0:01:46.482 ********* 2025-04-01 19:50:10.118736 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.118750 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.118764 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.118778 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.118792 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.118806 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.118820 | orchestrator | 2025-04-01 19:50:10.118834 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.118848 | orchestrator | Tuesday 01 April 2025 19:38:00 +0000 (0:00:01.627) 0:01:48.110 ********* 2025-04-01 19:50:10.118862 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.118876 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.118890 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.118904 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.118918 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.118939 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.118953 | orchestrator | 2025-04-01 19:50:10.118967 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.118981 | orchestrator | Tuesday 01 April 2025 19:38:01 +0000 (0:00:01.486) 0:01:49.597 ********* 2025-04-01 19:50:10.118995 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.119009 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119111 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.119130 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119143 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.119161 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119174 | orchestrator | 2025-04-01 19:50:10.119187 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 
19:50:10.119200 | orchestrator | Tuesday 01 April 2025 19:38:03 +0000 (0:00:01.079) 0:01:50.677 ********* 2025-04-01 19:50:10.119212 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119225 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119237 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119249 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119262 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119274 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119286 | orchestrator | 2025-04-01 19:50:10.119299 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.119311 | orchestrator | Tuesday 01 April 2025 19:38:04 +0000 (0:00:01.155) 0:01:51.832 ********* 2025-04-01 19:50:10.119324 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119336 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119348 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119361 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119373 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119385 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119397 | orchestrator | 2025-04-01 19:50:10.119410 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.119422 | orchestrator | Tuesday 01 April 2025 19:38:04 +0000 (0:00:00.695) 0:01:52.527 ********* 2025-04-01 19:50:10.119435 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119447 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119459 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119472 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119484 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119496 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119509 | orchestrator | 2025-04-01 19:50:10.119521 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.119534 | orchestrator | Tuesday 01 April 2025 19:38:05 +0000 (0:00:00.972) 0:01:53.500 ********* 2025-04-01 19:50:10.119546 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119558 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119571 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119583 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119595 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119608 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119620 | orchestrator | 2025-04-01 19:50:10.119649 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.119662 | orchestrator | Tuesday 01 April 2025 19:38:06 +0000 (0:00:00.732) 0:01:54.233 ********* 2025-04-01 19:50:10.119694 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119708 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119720 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119734 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.119747 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.119761 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.119774 | orchestrator | 2025-04-01 19:50:10.119788 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 
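Editor's note: the ceph-handler tasks around this point probe every node for running Ceph containers (mon, osd, mds, rgw, mgr, rbd-mirror, nfs, tcmu-runner, rbd-target-api, rbd-target-gw and, just below, ceph-crash), presumably so that later handler logic only touches daemons that actually exist on a host. Below is a minimal, illustrative sketch of such a probe in Python — not the ceph-ansible implementation — assuming a podman/docker-compatible CLI and that container names contain the daemon name:

```python
# check_ceph_container.py -- illustrative only, not the ceph-ansible check.
# Reports whether a container whose name contains the given daemon string is running.
import subprocess

def container_running(daemon: str, runtime: str = "podman") -> bool:
    """List running containers via the runtime CLI and look for the daemon name."""
    result = subprocess.run(
        [runtime, "ps", "--format", "{{.Names}}"],  # names of running containers only
        capture_output=True, text=True, check=False,
    )
    if result.returncode != 0:
        return False
    return any(daemon in name for name in result.stdout.splitlines())

if __name__ == "__main__":
    for daemon in ("ceph-mon", "ceph-osd", "ceph-mds", "ceph-rgw", "ceph-mgr", "ceph-crash"):
        print(f"{daemon}: {'running' if container_running(daemon) else 'not found'}")
```

Run against this testbed, such a probe would match the pattern in the log: mon/mgr containers on testbed-node-0..2, osd/mds/rgw containers on testbed-node-3..5, and ceph-crash everywhere.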
2025-04-01 19:50:10.119810 | orchestrator | Tuesday 01 April 2025 19:38:07 +0000 (0:00:00.920) 0:01:55.153 ********* 2025-04-01 19:50:10.119825 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.119839 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.119852 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.119866 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.119880 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.119893 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.119907 | orchestrator | 2025-04-01 19:50:10.119921 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.119934 | orchestrator | Tuesday 01 April 2025 19:38:08 +0000 (0:00:01.041) 0:01:56.195 ********* 2025-04-01 19:50:10.119948 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.119961 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.119975 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.119989 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.120002 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.120016 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.120029 | orchestrator | 2025-04-01 19:50:10.120043 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.120056 | orchestrator | Tuesday 01 April 2025 19:38:09 +0000 (0:00:00.945) 0:01:57.141 ********* 2025-04-01 19:50:10.120070 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.120084 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.120096 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.120109 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.120121 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.120134 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.120146 | orchestrator | 2025-04-01 19:50:10.120159 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.120171 | orchestrator | Tuesday 01 April 2025 19:38:10 +0000 (0:00:00.653) 0:01:57.794 ********* 2025-04-01 19:50:10.120184 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.120196 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.120209 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.120230 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.120244 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.120257 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.120270 | orchestrator | 2025-04-01 19:50:10.120282 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.120295 | orchestrator | Tuesday 01 April 2025 19:38:11 +0000 (0:00:01.129) 0:01:58.923 ********* 2025-04-01 19:50:10.120308 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.120320 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.120333 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.120345 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.120358 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.120370 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.120382 | orchestrator | 2025-04-01 19:50:10.120395 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.120480 | orchestrator | Tuesday 01 April 2025 19:38:12 +0000 
(0:00:01.044) 0:01:59.968 ********* 2025-04-01 19:50:10.120500 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.120513 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.120526 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.120539 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.120553 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.120565 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.120578 | orchestrator | 2025-04-01 19:50:10.120592 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.120605 | orchestrator | Tuesday 01 April 2025 19:38:13 +0000 (0:00:01.100) 0:02:01.068 ********* 2025-04-01 19:50:10.120618 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.120648 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.120662 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.120682 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.120694 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.120707 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.120719 | orchestrator | 2025-04-01 19:50:10.120731 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.120744 | orchestrator | Tuesday 01 April 2025 19:38:14 +0000 (0:00:00.641) 0:02:01.709 ********* 2025-04-01 19:50:10.120756 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.120769 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.120781 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.120793 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.120806 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.120818 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.120831 | orchestrator | 2025-04-01 19:50:10.120843 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.120855 | orchestrator | Tuesday 01 April 2025 19:38:15 +0000 (0:00:00.992) 0:02:02.702 ********* 2025-04-01 19:50:10.120868 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.120880 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.120893 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.120905 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.120918 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.120930 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.120942 | orchestrator | 2025-04-01 19:50:10.120955 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.120967 | orchestrator | Tuesday 01 April 2025 19:38:15 +0000 (0:00:00.658) 0:02:03.360 ********* 2025-04-01 19:50:10.120980 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.120993 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.121005 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.121018 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.121031 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.121043 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.121055 | orchestrator | 2025-04-01 19:50:10.121068 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.121086 | orchestrator | Tuesday 01 April 2025 19:38:16 +0000 (0:00:00.961) 0:02:04.322 ********* 2025-04-01 19:50:10.121099 | 
orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121113 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121126 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121140 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121153 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121167 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121180 | orchestrator | 2025-04-01 19:50:10.121194 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.121208 | orchestrator | Tuesday 01 April 2025 19:38:17 +0000 (0:00:00.675) 0:02:04.998 ********* 2025-04-01 19:50:10.121221 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121234 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121248 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121261 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121274 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121293 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121307 | orchestrator | 2025-04-01 19:50:10.121320 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.121334 | orchestrator | Tuesday 01 April 2025 19:38:18 +0000 (0:00:00.901) 0:02:05.899 ********* 2025-04-01 19:50:10.121347 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121361 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121375 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121388 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121402 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121415 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121439 | orchestrator | 2025-04-01 19:50:10.121453 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.121467 | orchestrator | Tuesday 01 April 2025 19:38:19 +0000 (0:00:00.782) 0:02:06.682 ********* 2025-04-01 19:50:10.121480 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121492 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121505 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121517 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121530 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121542 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121554 | orchestrator | 2025-04-01 19:50:10.121567 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.121579 | orchestrator | Tuesday 01 April 2025 19:38:20 +0000 (0:00:01.001) 0:02:07.683 ********* 2025-04-01 19:50:10.121592 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121605 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121617 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121645 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121658 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121670 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121683 | orchestrator | 2025-04-01 19:50:10.121695 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.121708 | orchestrator | Tuesday 01 April 2025 19:38:20 +0000 (0:00:00.911) 0:02:08.595 ********* 2025-04-01 
19:50:10.121720 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121733 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121745 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121757 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121769 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121782 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121794 | orchestrator | 2025-04-01 19:50:10.121876 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.121895 | orchestrator | Tuesday 01 April 2025 19:38:21 +0000 (0:00:00.912) 0:02:09.507 ********* 2025-04-01 19:50:10.121909 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.121922 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.121935 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.121947 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.121960 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.121973 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.121985 | orchestrator | 2025-04-01 19:50:10.121998 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.122012 | orchestrator | Tuesday 01 April 2025 19:38:22 +0000 (0:00:00.784) 0:02:10.292 ********* 2025-04-01 19:50:10.122052 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122065 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122077 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122090 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122102 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122115 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122127 | orchestrator | 2025-04-01 19:50:10.122140 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.122153 | orchestrator | Tuesday 01 April 2025 19:38:23 +0000 (0:00:01.097) 0:02:11.390 ********* 2025-04-01 19:50:10.122165 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122178 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122190 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122203 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122215 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122227 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122240 | orchestrator | 2025-04-01 19:50:10.122252 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.122272 | orchestrator | Tuesday 01 April 2025 19:38:24 +0000 (0:00:00.921) 0:02:12.311 ********* 2025-04-01 19:50:10.122285 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122298 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122310 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122323 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122335 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122348 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122360 | orchestrator | 2025-04-01 19:50:10.122373 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.122385 
| orchestrator | Tuesday 01 April 2025 19:38:25 +0000 (0:00:00.963) 0:02:13.274 ********* 2025-04-01 19:50:10.122398 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122410 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122429 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122441 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122454 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122466 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122496 | orchestrator | 2025-04-01 19:50:10.122511 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.122525 | orchestrator | Tuesday 01 April 2025 19:38:26 +0000 (0:00:00.691) 0:02:13.966 ********* 2025-04-01 19:50:10.122539 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122552 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122566 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122580 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122594 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122608 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122621 | orchestrator | 2025-04-01 19:50:10.122682 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.122697 | orchestrator | Tuesday 01 April 2025 19:38:27 +0000 (0:00:00.957) 0:02:14.924 ********* 2025-04-01 19:50:10.122711 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.122725 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.122738 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.122752 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.122766 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.122780 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.122793 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.122807 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.122821 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.122835 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.122849 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.122861 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.122874 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.122886 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.122898 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.122911 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.122929 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.122941 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.122954 | orchestrator | 2025-04-01 19:50:10.122966 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.122979 | orchestrator | Tuesday 01 April 2025 19:38:28 +0000 (0:00:00.703) 0:02:15.627 ********* 2025-04-01 19:50:10.122991 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-01 19:50:10.123004 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-01 19:50:10.123016 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123029 | orchestrator | skipping: 
[testbed-node-1] => (item=osd memory target)  2025-04-01 19:50:10.123048 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-01 19:50:10.123060 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123073 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-01 19:50:10.123085 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-01 19:50:10.123179 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123195 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-01 19:50:10.123206 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-01 19:50:10.123216 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123226 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-01 19:50:10.123237 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-01 19:50:10.123247 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123257 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-01 19:50:10.123267 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-01 19:50:10.123278 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123288 | orchestrator | 2025-04-01 19:50:10.123298 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.123308 | orchestrator | Tuesday 01 April 2025 19:38:29 +0000 (0:00:01.004) 0:02:16.631 ********* 2025-04-01 19:50:10.123318 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123329 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123339 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123350 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123360 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123370 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123380 | orchestrator | 2025-04-01 19:50:10.123390 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.123400 | orchestrator | Tuesday 01 April 2025 19:38:29 +0000 (0:00:00.669) 0:02:17.300 ********* 2025-04-01 19:50:10.123410 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123420 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123430 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123441 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123451 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123461 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123471 | orchestrator | 2025-04-01 19:50:10.123482 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.123493 | orchestrator | Tuesday 01 April 2025 19:38:30 +0000 (0:00:00.956) 0:02:18.257 ********* 2025-04-01 19:50:10.123503 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123514 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123524 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123534 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123544 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123554 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123564 | orchestrator | 2025-04-01 
19:50:10.123574 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.123585 | orchestrator | Tuesday 01 April 2025 19:38:31 +0000 (0:00:00.761) 0:02:19.019 ********* 2025-04-01 19:50:10.123595 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123605 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123615 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123639 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123650 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123661 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123671 | orchestrator | 2025-04-01 19:50:10.123681 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.123692 | orchestrator | Tuesday 01 April 2025 19:38:32 +0000 (0:00:00.989) 0:02:20.008 ********* 2025-04-01 19:50:10.123709 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123719 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123729 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123740 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123750 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123765 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123775 | orchestrator | 2025-04-01 19:50:10.123789 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.123801 | orchestrator | Tuesday 01 April 2025 19:38:33 +0000 (0:00:00.770) 0:02:20.779 ********* 2025-04-01 19:50:10.123813 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123824 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.123835 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.123846 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.123858 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.123869 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.123880 | orchestrator | 2025-04-01 19:50:10.123891 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.123902 | orchestrator | Tuesday 01 April 2025 19:38:34 +0000 (0:00:00.989) 0:02:21.769 ********* 2025-04-01 19:50:10.123913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.123925 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.123936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.123947 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.123959 | orchestrator | 2025-04-01 19:50:10.123970 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.123981 | orchestrator | Tuesday 01 April 2025 19:38:34 +0000 (0:00:00.451) 0:02:22.221 ********* 2025-04-01 19:50:10.123993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.124004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.124015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.124026 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124038 | orchestrator | 2025-04-01 19:50:10.124049 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 
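Editor's note: the ceph-facts tasks in this stretch derive the RADOS gateway bind address, trying a configured radosgw_address_block (CIDR), an explicit radosgw_address, and a radosgw_interface in turn; all variants are skipped on this run because the testbed supplies the addresses directly. A rough Python sketch of the CIDR-based variant — assuming the node's candidate IPs are already known, and with an example CIDR chosen purely for illustration:

```python
# pick_rgw_address.py -- illustrative sketch of address-block based selection,
# not the ceph-ansible filter chain.
import ipaddress

def pick_address(node_ips, address_block):
    """Return the first node IP that falls inside the configured CIDR block."""
    network = ipaddress.ip_network(address_block)
    for ip in node_ips:
        if ipaddress.ip_address(ip) in network:
            return ip
    raise LookupError(f"no address in {address_block} found among {node_ips}")

# Example: 192.168.16.0/24 is an assumed block; 192.168.16.13 matches the
# testbed-node-3 address seen elsewhere in this log.
print(pick_address(["10.0.2.15", "192.168.16.13"], "192.168.16.0/24"))  # -> 192.168.16.13
```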
2025-04-01 19:50:10.124060 | orchestrator | Tuesday 01 April 2025 19:38:35 +0000 (0:00:00.429) 0:02:22.650 ********* 2025-04-01 19:50:10.124071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.124082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.124093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.124162 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124177 | orchestrator | 2025-04-01 19:50:10.124187 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.124198 | orchestrator | Tuesday 01 April 2025 19:38:35 +0000 (0:00:00.452) 0:02:23.102 ********* 2025-04-01 19:50:10.124208 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124218 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.124228 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124238 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124248 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124258 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124268 | orchestrator | 2025-04-01 19:50:10.124279 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.124289 | orchestrator | Tuesday 01 April 2025 19:38:36 +0000 (0:00:00.891) 0:02:23.994 ********* 2025-04-01 19:50:10.124331 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.124343 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124353 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.124364 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.124374 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.124390 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124400 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.124410 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124421 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.124431 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124441 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.124451 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124461 | orchestrator | 2025-04-01 19:50:10.124472 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.124482 | orchestrator | Tuesday 01 April 2025 19:38:37 +0000 (0:00:00.936) 0:02:24.931 ********* 2025-04-01 19:50:10.124492 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124502 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.124512 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124523 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124533 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124543 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124553 | orchestrator | 2025-04-01 19:50:10.124563 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.124573 | orchestrator | Tuesday 01 April 2025 19:38:38 +0000 (0:00:00.941) 0:02:25.872 ********* 2025-04-01 19:50:10.124583 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124594 | orchestrator | skipping: [testbed-node-1] 
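Editor's note: the rgw_instances facts being reset and recomputed here produce one dictionary per RGW instance and host, of the shape shown earlier in this log, e.g. {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}. A small illustrative sketch of how such a list can be generated — the per-instance port offset and the base port of 8081 are assumptions taken from the logged items, not a statement about ceph-ansible's exact arithmetic:

```python
# build_rgw_instances.py -- illustrative reconstruction of the rgw_instances fact.
def build_rgw_instances(address: str, count: int = 1, base_port: int = 8081):
    """One dict per RGW instance on a host; the port-per-instance offset is assumed."""
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": address,
            "radosgw_frontend_port": base_port + i,
        }
        for i in range(count)
    ]

# Mirrors the items logged for testbed-node-3..5.
for addr in ("192.168.16.13", "192.168.16.14", "192.168.16.15"):
    print(build_rgw_instances(addr))
```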
2025-04-01 19:50:10.124604 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124614 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124638 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124649 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124660 | orchestrator | 2025-04-01 19:50:10.124670 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.124680 | orchestrator | Tuesday 01 April 2025 19:38:38 +0000 (0:00:00.662) 0:02:26.535 ********* 2025-04-01 19:50:10.124690 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.124701 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124711 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.124721 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.124731 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.124742 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124752 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.124762 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124772 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.124782 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124793 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.124804 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124816 | orchestrator | 2025-04-01 19:50:10.124827 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.124854 | orchestrator | Tuesday 01 April 2025 19:38:40 +0000 (0:00:01.173) 0:02:27.708 ********* 2025-04-01 19:50:10.124865 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.124877 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.124888 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.124899 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.124911 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.124927 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.124938 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.124950 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.124961 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.124978 | orchestrator | 2025-04-01 19:50:10.124990 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.125001 | orchestrator | Tuesday 01 April 2025 19:38:40 +0000 (0:00:00.798) 0:02:28.506 ********* 2025-04-01 19:50:10.125012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.125023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.125035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.125046 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:50:10.125057 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:50:10.125068 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:50:10.125079 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.125095 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:50:10.125170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:50:10.125184 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:50:10.125195 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.125206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.125216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.125226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.125236 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.125246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.125256 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.125266 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.125276 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.125286 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.125297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.125307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.125317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.125327 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.125343 | orchestrator | 2025-04-01 19:50:10.125354 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.125364 | orchestrator | Tuesday 01 April 2025 19:38:43 +0000 (0:00:02.144) 0:02:30.651 ********* 2025-04-01 19:50:10.125375 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.125386 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.125396 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.125407 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.125417 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.125427 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.125437 | orchestrator | 2025-04-01 19:50:10.125447 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.125458 | orchestrator | Tuesday 01 April 2025 19:38:45 +0000 (0:00:02.042) 0:02:32.693 ********* 2025-04-01 19:50:10.125468 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.125478 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.125488 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.125498 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.125508 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.125519 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.125529 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.125539 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.125549 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.125559 | orchestrator | 2025-04-01 19:50:10.125569 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.125580 | orchestrator | Tuesday 01 
April 2025 19:38:46 +0000 (0:00:01.623) 0:02:34.317 ********* 2025-04-01 19:50:10.125600 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.125610 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.125620 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.125675 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.125686 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.125696 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.125707 | orchestrator | 2025-04-01 19:50:10.125717 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.125727 | orchestrator | Tuesday 01 April 2025 19:38:48 +0000 (0:00:01.853) 0:02:36.171 ********* 2025-04-01 19:50:10.125738 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.125748 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.125758 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.125768 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.125779 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.125789 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.125800 | orchestrator | 2025-04-01 19:50:10.125810 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-04-01 19:50:10.125819 | orchestrator | Tuesday 01 April 2025 19:38:50 +0000 (0:00:01.653) 0:02:37.824 ********* 2025-04-01 19:50:10.125828 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.125837 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.125847 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.125856 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.125865 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.125875 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.125884 | orchestrator | 2025-04-01 19:50:10.125898 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-04-01 19:50:10.125909 | orchestrator | Tuesday 01 April 2025 19:38:52 +0000 (0:00:01.833) 0:02:39.658 ********* 2025-04-01 19:50:10.125918 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.125928 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.125937 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.125946 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.125956 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.125965 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.125974 | orchestrator | 2025-04-01 19:50:10.125984 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-04-01 19:50:10.125994 | orchestrator | Tuesday 01 April 2025 19:38:54 +0000 (0:00:02.009) 0:02:41.667 ********* 2025-04-01 19:50:10.126004 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.126014 | orchestrator | 2025-04-01 19:50:10.126054 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-04-01 19:50:10.126063 | orchestrator | Tuesday 01 April 2025 19:38:55 +0000 (0:00:01.424) 0:02:43.091 ********* 2025-04-01 19:50:10.126073 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.126083 | orchestrator | skipping: [testbed-node-1] 2025-04-01 
19:50:10.126092 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.126102 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.126111 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.126121 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.126130 | orchestrator | 2025-04-01 19:50:10.126196 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-04-01 19:50:10.126209 | orchestrator | Tuesday 01 April 2025 19:38:56 +0000 (0:00:00.952) 0:02:44.044 ********* 2025-04-01 19:50:10.126218 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.126226 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.126235 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.126244 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.126253 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.126273 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.126281 | orchestrator | 2025-04-01 19:50:10.126290 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-04-01 19:50:10.126299 | orchestrator | Tuesday 01 April 2025 19:38:57 +0000 (0:00:00.626) 0:02:44.671 ********* 2025-04-01 19:50:10.126308 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126316 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126325 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126334 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126342 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126351 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126360 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126368 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126377 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126386 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-01 19:50:10.126394 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126403 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-01 19:50:10.126412 | orchestrator | 2025-04-01 19:50:10.126420 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-04-01 19:50:10.126429 | orchestrator | Tuesday 01 April 2025 19:38:59 +0000 (0:00:02.029) 0:02:46.700 ********* 2025-04-01 19:50:10.126437 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.126446 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.126459 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.126468 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.126477 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.126485 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.126494 | orchestrator | 2025-04-01 19:50:10.126503 | orchestrator | TASK [ceph-container-common : restore 
certificates selinux context] ************ 2025-04-01 19:50:10.126511 | orchestrator | Tuesday 01 April 2025 19:39:00 +0000 (0:00:01.735) 0:02:48.435 ********* 2025-04-01 19:50:10.126520 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.126529 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.126537 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.126546 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.126554 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.126563 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.126571 | orchestrator | 2025-04-01 19:50:10.126580 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-04-01 19:50:10.126589 | orchestrator | Tuesday 01 April 2025 19:39:01 +0000 (0:00:01.049) 0:02:49.485 ********* 2025-04-01 19:50:10.126597 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.126606 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.126615 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.126637 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.126646 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.126655 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.126664 | orchestrator | 2025-04-01 19:50:10.126672 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-04-01 19:50:10.126681 | orchestrator | Tuesday 01 April 2025 19:39:03 +0000 (0:00:01.321) 0:02:50.807 ********* 2025-04-01 19:50:10.126690 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.126705 | orchestrator | 2025-04-01 19:50:10.126714 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-04-01 19:50:10.126723 | orchestrator | Tuesday 01 April 2025 19:39:05 +0000 (0:00:01.853) 0:02:52.661 ********* 2025-04-01 19:50:10.126731 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.126754 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.126763 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.126772 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.126781 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.126789 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.126798 | orchestrator | 2025-04-01 19:50:10.126811 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-04-01 19:50:10.126820 | orchestrator | Tuesday 01 April 2025 19:39:34 +0000 (0:00:29.410) 0:03:22.072 ********* 2025-04-01 19:50:10.126829 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.126839 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.126848 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-01 19:50:10.126857 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.126867 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.126877 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.126940 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  
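Editor's note: the image pull above (registry.osism.tech/osism/ceph-daemon:17.2.7, roughly 29 seconds per the task timer) dominates this phase, while the alertmanager/prometheus/grafana pulls are skipped on this run. A hedged sketch of a pull-with-retry helper, assuming a docker/podman CLI; the retry count and delay are arbitrary illustration values, not ceph-ansible's:

```python
# pull_image.py -- illustrative pull-with-retry, not the ceph-ansible task.
import subprocess
import time

def pull_image(image: str, runtime: str = "docker", retries: int = 3, delay: float = 10.0) -> None:
    """Pull a container image, retrying a few times on transient registry errors."""
    for attempt in range(1, retries + 1):
        result = subprocess.run([runtime, "pull", image], capture_output=True, text=True)
        if result.returncode == 0:
            return
        print(f"attempt {attempt}/{retries} failed: {result.stderr.strip()}")
        time.sleep(delay)
    raise RuntimeError(f"could not pull {image} after {retries} attempts")

if __name__ == "__main__":
    pull_image("registry.osism.tech/osism/ceph-daemon:17.2.7")
```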
2025-04-01 19:50:10.126953 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.126963 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.126974 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.126984 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-01 19:50:10.126994 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127004 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.127013 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.127023 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-01 19:50:10.127033 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127043 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.127052 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.127062 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-01 19:50:10.127072 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127081 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-01 19:50:10.127091 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-01 19:50:10.127101 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-01 19:50:10.127111 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127121 | orchestrator | 2025-04-01 19:50:10.127130 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-04-01 19:50:10.127140 | orchestrator | Tuesday 01 April 2025 19:39:35 +0000 (0:00:01.077) 0:03:23.149 ********* 2025-04-01 19:50:10.127150 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127159 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127169 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127179 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127189 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127197 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127206 | orchestrator | 2025-04-01 19:50:10.127215 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-04-01 19:50:10.127230 | orchestrator | Tuesday 01 April 2025 19:39:36 +0000 (0:00:00.799) 0:03:23.948 ********* 2025-04-01 19:50:10.127239 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127248 | orchestrator | 2025-04-01 19:50:10.127257 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-04-01 19:50:10.127266 | orchestrator | Tuesday 01 April 2025 19:39:36 +0000 (0:00:00.210) 0:03:24.159 ********* 2025-04-01 19:50:10.127274 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127283 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127292 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127301 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127310 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127319 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127328 | 
orchestrator | 2025-04-01 19:50:10.127337 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-04-01 19:50:10.127346 | orchestrator | Tuesday 01 April 2025 19:39:37 +0000 (0:00:01.168) 0:03:25.328 ********* 2025-04-01 19:50:10.127354 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127363 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127372 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127381 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127390 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127399 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127407 | orchestrator | 2025-04-01 19:50:10.127416 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-04-01 19:50:10.127425 | orchestrator | Tuesday 01 April 2025 19:39:38 +0000 (0:00:00.783) 0:03:26.111 ********* 2025-04-01 19:50:10.127434 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127443 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127452 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127460 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127474 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127483 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127493 | orchestrator | 2025-04-01 19:50:10.127501 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-04-01 19:50:10.127514 | orchestrator | Tuesday 01 April 2025 19:39:39 +0000 (0:00:01.052) 0:03:27.163 ********* 2025-04-01 19:50:10.127523 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.127532 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.127541 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.127550 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.127558 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.127567 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.127576 | orchestrator | 2025-04-01 19:50:10.127585 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-04-01 19:50:10.127594 | orchestrator | Tuesday 01 April 2025 19:39:41 +0000 (0:00:01.815) 0:03:28.979 ********* 2025-04-01 19:50:10.127603 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.127612 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.127620 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.127643 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.127652 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.127661 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.127669 | orchestrator | 2025-04-01 19:50:10.127678 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-04-01 19:50:10.127687 | orchestrator | Tuesday 01 April 2025 19:39:42 +0000 (0:00:01.297) 0:03:30.276 ********* 2025-04-01 19:50:10.127696 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.127706 | orchestrator | 2025-04-01 19:50:10.127763 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-04-01 19:50:10.127776 | orchestrator | Tuesday 01 April 2025 19:39:44 +0000 (0:00:01.780) 0:03:32.056 ********* 2025-04-01 
19:50:10.127792 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127801 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127810 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127819 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127828 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127837 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127845 | orchestrator | 2025-04-01 19:50:10.127854 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-01 19:50:10.127863 | orchestrator | Tuesday 01 April 2025 19:39:45 +0000 (0:00:01.206) 0:03:33.262 ********* 2025-04-01 19:50:10.127872 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127881 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127890 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127899 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127908 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127917 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.127925 | orchestrator | 2025-04-01 19:50:10.127934 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-01 19:50:10.127943 | orchestrator | Tuesday 01 April 2025 19:39:46 +0000 (0:00:00.855) 0:03:34.118 ********* 2025-04-01 19:50:10.127952 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.127961 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.127970 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.127979 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.127988 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.127996 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.128005 | orchestrator | 2025-04-01 19:50:10.128014 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-01 19:50:10.128023 | orchestrator | Tuesday 01 April 2025 19:39:47 +0000 (0:00:00.960) 0:03:35.078 ********* 2025-04-01 19:50:10.128032 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.128041 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.128050 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.128059 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.128067 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.128076 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.128085 | orchestrator | 2025-04-01 19:50:10.128094 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-01 19:50:10.128103 | orchestrator | Tuesday 01 April 2025 19:39:48 +0000 (0:00:00.827) 0:03:35.906 ********* 2025-04-01 19:50:10.128112 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.128121 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.128130 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.128138 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.128147 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.128156 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.128165 | orchestrator | 2025-04-01 19:50:10.128174 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-01 19:50:10.128183 | orchestrator | Tuesday 01 April 2025 19:39:49 +0000 (0:00:01.274) 0:03:37.181 ********* 
2025-04-01 19:50:10.128192 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.128201 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.128210 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.128219 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.128228 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.128237 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.128245 | orchestrator | 2025-04-01 19:50:10.128254 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-01 19:50:10.128263 | orchestrator | Tuesday 01 April 2025 19:39:50 +0000 (0:00:00.714) 0:03:37.895 ********* 2025-04-01 19:50:10.128284 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.128298 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.128307 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.128320 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.128328 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.128337 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.128345 | orchestrator | 2025-04-01 19:50:10.128354 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-01 19:50:10.128363 | orchestrator | Tuesday 01 April 2025 19:39:51 +0000 (0:00:01.007) 0:03:38.903 ********* 2025-04-01 19:50:10.128371 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.128380 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.128388 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.128398 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.128408 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.128417 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.128427 | orchestrator | 2025-04-01 19:50:10.128436 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.128446 | orchestrator | Tuesday 01 April 2025 19:39:52 +0000 (0:00:01.689) 0:03:40.592 ********* 2025-04-01 19:50:10.128456 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.128465 | orchestrator | 2025-04-01 19:50:10.128475 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-01 19:50:10.128484 | orchestrator | Tuesday 01 April 2025 19:39:54 +0000 (0:00:01.645) 0:03:42.238 ********* 2025-04-01 19:50:10.128494 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-01 19:50:10.128503 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-04-01 19:50:10.128513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128522 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-01 19:50:10.128532 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-04-01 19:50:10.128541 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128550 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-01 19:50:10.128560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128619 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128645 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-01 19:50:10.128655 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128665 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128674 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128693 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128703 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-01 19:50:10.128712 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128722 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128751 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128760 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-01 19:50:10.128768 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128777 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128794 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128803 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128811 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-01 19:50:10.128826 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128835 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128843 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128861 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-01 19:50:10.128869 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128878 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128887 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128899 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128908 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128917 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-01 19:50:10.128925 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128934 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-01 19:50:10.128942 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128951 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-01 19:50:10.128960 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-01 19:50:10.128968 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-01 19:50:10.128977 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 
2025-04-01 19:50:10.128985 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.128994 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-01 19:50:10.129002 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-01 19:50:10.129011 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.129019 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.129028 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129037 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.129045 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.129054 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-01 19:50:10.129062 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129071 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129079 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129088 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129097 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129105 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-01 19:50:10.129114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129122 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129131 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129139 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129157 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129215 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-01 19:50:10.129233 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129251 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129260 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129268 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129277 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-01 19:50:10.129293 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129302 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-01 19:50:10.129311 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129320 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129328 | orchestrator | changed: [testbed-node-3] => 
(item=/var/run/ceph) 2025-04-01 19:50:10.129337 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-01 19:50:10.129346 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-01 19:50:10.129354 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-01 19:50:10.129363 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-01 19:50:10.129372 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-01 19:50:10.129380 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-01 19:50:10.129389 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-01 19:50:10.129398 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-01 19:50:10.129406 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-01 19:50:10.129415 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-01 19:50:10.129423 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-01 19:50:10.129432 | orchestrator | 2025-04-01 19:50:10.129440 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.129453 | orchestrator | Tuesday 01 April 2025 19:40:00 +0000 (0:00:05.981) 0:03:48.220 ********* 2025-04-01 19:50:10.129461 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.129470 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.129479 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.129488 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.129497 | orchestrator | 2025-04-01 19:50:10.129505 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-01 19:50:10.129514 | orchestrator | Tuesday 01 April 2025 19:40:02 +0000 (0:00:01.538) 0:03:49.758 ********* 2025-04-01 19:50:10.129523 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129531 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129553 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129562 | orchestrator | 2025-04-01 19:50:10.129571 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-01 19:50:10.129579 | orchestrator | Tuesday 01 April 2025 19:40:03 +0000 (0:00:01.229) 0:03:50.988 ********* 2025-04-01 19:50:10.129588 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129597 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129610 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.129619 | orchestrator | 2025-04-01 19:50:10.129642 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.129651 | 
orchestrator | Tuesday 01 April 2025 19:40:04 +0000 (0:00:01.470) 0:03:52.459 ********* 2025-04-01 19:50:10.129660 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.129669 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.129678 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.129687 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.129695 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.129704 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.129713 | orchestrator | 2025-04-01 19:50:10.129722 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.129730 | orchestrator | Tuesday 01 April 2025 19:40:05 +0000 (0:00:01.085) 0:03:53.545 ********* 2025-04-01 19:50:10.129739 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.129748 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.129757 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.129766 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.129774 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.129783 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.129792 | orchestrator | 2025-04-01 19:50:10.129801 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.129809 | orchestrator | Tuesday 01 April 2025 19:40:06 +0000 (0:00:00.752) 0:03:54.298 ********* 2025-04-01 19:50:10.129818 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.129878 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.129890 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.129900 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.129910 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.129919 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.129928 | orchestrator | 2025-04-01 19:50:10.129937 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.129947 | orchestrator | Tuesday 01 April 2025 19:40:07 +0000 (0:00:01.073) 0:03:55.371 ********* 2025-04-01 19:50:10.129955 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.129965 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.129974 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.129983 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.129992 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130001 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130010 | orchestrator | 2025-04-01 19:50:10.130039 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.130048 | orchestrator | Tuesday 01 April 2025 19:40:08 +0000 (0:00:00.780) 0:03:56.151 ********* 2025-04-01 19:50:10.130057 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130067 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130076 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130085 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.130094 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130103 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130112 | orchestrator | 2025-04-01 19:50:10.130121 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.130130 | 
orchestrator | Tuesday 01 April 2025 19:40:09 +0000 (0:00:00.964) 0:03:57.116 ********* 2025-04-01 19:50:10.130139 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130148 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130157 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130166 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.130175 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130184 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130199 | orchestrator | 2025-04-01 19:50:10.130208 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.130217 | orchestrator | Tuesday 01 April 2025 19:40:10 +0000 (0:00:00.730) 0:03:57.846 ********* 2025-04-01 19:50:10.130227 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130236 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130250 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130259 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.130269 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130278 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130287 | orchestrator | 2025-04-01 19:50:10.130296 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.130305 | orchestrator | Tuesday 01 April 2025 19:40:11 +0000 (0:00:01.105) 0:03:58.951 ********* 2025-04-01 19:50:10.130314 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130323 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130332 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130341 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.130350 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130359 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130368 | orchestrator | 2025-04-01 19:50:10.130377 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.130386 | orchestrator | Tuesday 01 April 2025 19:40:12 +0000 (0:00:00.712) 0:03:59.664 ********* 2025-04-01 19:50:10.130395 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130404 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130413 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130422 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.130431 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.130440 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.130448 | orchestrator | 2025-04-01 19:50:10.130457 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.130466 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:02.141) 0:04:01.805 ********* 2025-04-01 19:50:10.130475 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130484 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130494 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130503 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.130512 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.130521 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.130529 | orchestrator | 2025-04-01 19:50:10.130538 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from 
ceph_conf_overrides] *** 2025-04-01 19:50:10.130547 | orchestrator | Tuesday 01 April 2025 19:40:14 +0000 (0:00:00.775) 0:04:02.581 ********* 2025-04-01 19:50:10.130557 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.130566 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.130575 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130584 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.130596 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.130605 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130615 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.130661 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.130671 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130680 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.130689 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.130698 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.130707 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.130715 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.130724 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.130733 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.130747 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.130756 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.130764 | orchestrator | 2025-04-01 19:50:10.130773 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.130837 | orchestrator | Tuesday 01 April 2025 19:40:15 +0000 (0:00:01.012) 0:04:03.593 ********* 2025-04-01 19:50:10.130849 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-01 19:50:10.130863 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-01 19:50:10.130873 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.130882 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-01 19:50:10.130891 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-01 19:50:10.130900 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.130909 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-01 19:50:10.130918 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-01 19:50:10.130927 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.130936 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-04-01 19:50:10.130945 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-01 19:50:10.130954 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-01 19:50:10.130963 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-01 19:50:10.130972 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-01 19:50:10.130981 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-01 19:50:10.130990 | orchestrator | 2025-04-01 19:50:10.130999 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.131008 | orchestrator | Tuesday 01 April 2025 19:40:16 +0000 (0:00:00.746) 0:04:04.339 ********* 2025-04-01 19:50:10.131017 | orchestrator | 
skipping: [testbed-node-0] 2025-04-01 19:50:10.131026 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131035 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131043 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.131052 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.131061 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.131070 | orchestrator | 2025-04-01 19:50:10.131079 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.131088 | orchestrator | Tuesday 01 April 2025 19:40:17 +0000 (0:00:01.010) 0:04:05.350 ********* 2025-04-01 19:50:10.131098 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131106 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131115 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131124 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.131133 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.131142 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.131151 | orchestrator | 2025-04-01 19:50:10.131160 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.131169 | orchestrator | Tuesday 01 April 2025 19:40:18 +0000 (0:00:00.685) 0:04:06.036 ********* 2025-04-01 19:50:10.131178 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131187 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131195 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131203 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.131212 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.131220 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.131228 | orchestrator | 2025-04-01 19:50:10.131236 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.131245 | orchestrator | Tuesday 01 April 2025 19:40:19 +0000 (0:00:01.063) 0:04:07.099 ********* 2025-04-01 19:50:10.131253 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131265 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131278 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131287 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.131295 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.131303 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.131312 | orchestrator | 2025-04-01 19:50:10.131324 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.131332 | orchestrator | Tuesday 01 April 2025 19:40:20 +0000 (0:00:00.827) 0:04:07.926 ********* 2025-04-01 19:50:10.131341 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131349 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131357 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131365 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.131374 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.131382 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.131390 | orchestrator | 2025-04-01 19:50:10.131399 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.131407 | orchestrator | Tuesday 01 April 2025 19:40:21 +0000 (0:00:01.144) 0:04:09.071 ********* 
2025-04-01 19:50:10.131415 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131424 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131445 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131453 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.131462 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.131471 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.131480 | orchestrator | 2025-04-01 19:50:10.131489 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.131499 | orchestrator | Tuesday 01 April 2025 19:40:22 +0000 (0:00:00.791) 0:04:09.863 ********* 2025-04-01 19:50:10.131508 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.131517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.131526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.131535 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131545 | orchestrator | 2025-04-01 19:50:10.131553 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.131563 | orchestrator | Tuesday 01 April 2025 19:40:22 +0000 (0:00:00.711) 0:04:10.575 ********* 2025-04-01 19:50:10.131572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.131581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.131590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.131599 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131608 | orchestrator | 2025-04-01 19:50:10.131677 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.131689 | orchestrator | Tuesday 01 April 2025 19:40:23 +0000 (0:00:00.811) 0:04:11.386 ********* 2025-04-01 19:50:10.131699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.131708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.131717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.131725 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131734 | orchestrator | 2025-04-01 19:50:10.131743 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.131752 | orchestrator | Tuesday 01 April 2025 19:40:24 +0000 (0:00:00.479) 0:04:11.865 ********* 2025-04-01 19:50:10.131761 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131770 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131779 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131788 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.131796 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.131805 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.131814 | orchestrator | 2025-04-01 19:50:10.131823 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.131837 | orchestrator | Tuesday 01 April 2025 19:40:25 +0000 (0:00:00.748) 0:04:12.613 ********* 2025-04-01 19:50:10.131846 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.131854 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131862 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-04-01 19:50:10.131870 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.131878 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131886 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131894 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-01 19:50:10.131902 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-01 19:50:10.131910 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-01 19:50:10.131918 | orchestrator | 2025-04-01 19:50:10.131926 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.131934 | orchestrator | Tuesday 01 April 2025 19:40:26 +0000 (0:00:01.386) 0:04:14.000 ********* 2025-04-01 19:50:10.131942 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.131950 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.131959 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.131967 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.131975 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.131983 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.131991 | orchestrator | 2025-04-01 19:50:10.131999 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.132007 | orchestrator | Tuesday 01 April 2025 19:40:27 +0000 (0:00:00.682) 0:04:14.682 ********* 2025-04-01 19:50:10.132015 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.132023 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.132031 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.132039 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132047 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.132055 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.132063 | orchestrator | 2025-04-01 19:50:10.132071 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.132079 | orchestrator | Tuesday 01 April 2025 19:40:28 +0000 (0:00:01.079) 0:04:15.762 ********* 2025-04-01 19:50:10.132088 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.132096 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.132103 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.132111 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.132120 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.132128 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.132136 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.132144 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132152 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.132159 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.132167 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.132176 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.132184 | orchestrator | 2025-04-01 19:50:10.132192 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.132200 | orchestrator | Tuesday 01 April 2025 19:40:29 +0000 (0:00:01.223) 0:04:16.986 ********* 2025-04-01 19:50:10.132208 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.132220 | orchestrator | skipping: [testbed-node-1] 
2025-04-01 19:50:10.132229 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.132237 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.132245 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132253 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.132261 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.132273 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.132281 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.132289 | orchestrator | 2025-04-01 19:50:10.132297 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.132305 | orchestrator | Tuesday 01 April 2025 19:40:30 +0000 (0:00:01.029) 0:04:18.015 ********* 2025-04-01 19:50:10.132313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.132322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.132330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.132338 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.132346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:50:10.132399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:50:10.132411 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:50:10.132419 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.132427 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:50:10.132435 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:50:10.132443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:50:10.132451 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.132460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.132468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.132476 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.132484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.132492 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132500 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.132508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.132516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.132524 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.132532 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.132540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.132548 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.132556 | orchestrator | 2025-04-01 19:50:10.132564 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.132572 | orchestrator | Tuesday 01 April 2025 19:40:32 +0000 (0:00:01.817) 0:04:19.833 ********* 
2025-04-01 19:50:10.132580 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.132588 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.132596 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.132604 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.132613 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.132621 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.132641 | orchestrator | 2025-04-01 19:50:10.132650 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.132658 | orchestrator | Tuesday 01 April 2025 19:40:38 +0000 (0:00:05.826) 0:04:25.659 ********* 2025-04-01 19:50:10.132666 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.132674 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.132682 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.132691 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.132699 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.132707 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.132715 | orchestrator | 2025-04-01 19:50:10.132723 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-01 19:50:10.132731 | orchestrator | Tuesday 01 April 2025 19:40:39 +0000 (0:00:01.394) 0:04:27.054 ********* 2025-04-01 19:50:10.132744 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132753 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.132761 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.132769 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.132777 | orchestrator | 2025-04-01 19:50:10.132785 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-01 19:50:10.132794 | orchestrator | Tuesday 01 April 2025 19:40:40 +0000 (0:00:01.195) 0:04:28.250 ********* 2025-04-01 19:50:10.132802 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.132810 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.132818 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.132826 | orchestrator | 2025-04-01 19:50:10.132841 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-04-01 19:50:10.132850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.132858 | orchestrator | 2025-04-01 19:50:10.132866 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-01 19:50:10.132874 | orchestrator | Tuesday 01 April 2025 19:40:42 +0000 (0:00:01.661) 0:04:29.912 ********* 2025-04-01 19:50:10.132882 | orchestrator | 2025-04-01 19:50:10.132890 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-04-01 19:50:10.132899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.132907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.132915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.132923 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.132931 | orchestrator | 2025-04-01 19:50:10.132939 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-04-01 19:50:10.132947 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.132956 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.132964 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.132972 | orchestrator | 2025-04-01 19:50:10.132980 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-01 19:50:10.132988 | orchestrator | Tuesday 01 April 2025 19:40:43 +0000 (0:00:01.324) 0:04:31.236 ********* 2025-04-01 19:50:10.132996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.133008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.133016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.133024 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.133043 | orchestrator | 2025-04-01 19:50:10.133053 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-01 19:50:10.133063 | orchestrator | Tuesday 01 April 2025 19:40:44 +0000 (0:00:01.094) 0:04:32.330 ********* 2025-04-01 19:50:10.133072 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.133081 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.133090 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.133099 | orchestrator | 2025-04-01 19:50:10.133108 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-04-01 19:50:10.133165 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133177 | orchestrator | 2025-04-01 19:50:10.133186 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-01 19:50:10.133195 | orchestrator | Tuesday 01 April 2025 19:40:45 +0000 (0:00:01.040) 0:04:33.371 ********* 2025-04-01 19:50:10.133204 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.133213 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.133222 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.133230 | orchestrator | 2025-04-01 19:50:10.133238 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-04-01 19:50:10.133246 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133260 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.133268 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.133276 | orchestrator | 2025-04-01 19:50:10.133285 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-01 19:50:10.133293 | orchestrator | Tuesday 01 April 2025 19:40:46 +0000 (0:00:00.716) 0:04:34.087 ********* 2025-04-01 19:50:10.133301 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.133309 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.133317 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.133325 | orchestrator | 2025-04-01 19:50:10.133333 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-04-01 19:50:10.133341 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133349 | orchestrator | 2025-04-01 19:50:10.133358 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-01 19:50:10.133366 | orchestrator | Tuesday 01 April 2025 19:40:47 +0000 (0:00:00.858) 0:04:34.946 ********* 2025-04-01 19:50:10.133374 | orchestrator | 
skipping: [testbed-node-0] 2025-04-01 19:50:10.133382 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.133390 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.133402 | orchestrator | 2025-04-01 19:50:10.133410 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-04-01 19:50:10.133418 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133426 | orchestrator | 2025-04-01 19:50:10.133435 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-01 19:50:10.133443 | orchestrator | Tuesday 01 April 2025 19:40:48 +0000 (0:00:00.797) 0:04:35.743 ********* 2025-04-01 19:50:10.133451 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133459 | orchestrator | 2025-04-01 19:50:10.133467 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-01 19:50:10.133476 | orchestrator | Tuesday 01 April 2025 19:40:48 +0000 (0:00:00.137) 0:04:35.881 ********* 2025-04-01 19:50:10.133484 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.133492 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.133500 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.133508 | orchestrator | 2025-04-01 19:50:10.133516 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-04-01 19:50:10.133524 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133532 | orchestrator | 2025-04-01 19:50:10.133540 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-01 19:50:10.133549 | orchestrator | Tuesday 01 April 2025 19:40:49 +0000 (0:00:01.028) 0:04:36.909 ********* 2025-04-01 19:50:10.133557 | orchestrator | 2025-04-01 19:50:10.133565 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-04-01 19:50:10.133573 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.133589 | orchestrator | 2025-04-01 19:50:10.133597 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-01 19:50:10.133605 | orchestrator | Tuesday 01 April 2025 19:40:50 +0000 (0:00:00.881) 0:04:37.791 ********* 2025-04-01 19:50:10.133613 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.133634 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.133643 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.133651 | orchestrator | 2025-04-01 19:50:10.133659 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-04-01 19:50:10.133667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.133675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.133683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.133692 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133700 | orchestrator | 2025-04-01 19:50:10.133708 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-01 19:50:10.133724 | orchestrator | Tuesday 01 April 2025 19:40:51 +0000 (0:00:01.338) 0:04:39.129 ********* 2025-04-01 19:50:10.133732 | orchestrator | 2025-04-01 
19:50:10.133740 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-04-01 19:50:10.133748 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133757 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.133765 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.133773 | orchestrator | 2025-04-01 19:50:10.133781 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-01 19:50:10.133789 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.133797 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.133805 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.133814 | orchestrator | 2025-04-01 19:50:10.133822 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-01 19:50:10.133830 | orchestrator | Tuesday 01 April 2025 19:40:53 +0000 (0:00:01.569) 0:04:40.699 ********* 2025-04-01 19:50:10.133838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.133846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.133854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.133862 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.133871 | orchestrator | 2025-04-01 19:50:10.133879 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-01 19:50:10.133887 | orchestrator | Tuesday 01 April 2025 19:40:54 +0000 (0:00:00.988) 0:04:41.687 ********* 2025-04-01 19:50:10.133895 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.133903 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.133911 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.133919 | orchestrator | 2025-04-01 19:50:10.133973 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-04-01 19:50:10.133985 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.133993 | orchestrator | 2025-04-01 19:50:10.134001 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-01 19:50:10.134009 | orchestrator | Tuesday 01 April 2025 19:40:55 +0000 (0:00:01.062) 0:04:42.749 ********* 2025-04-01 19:50:10.134035 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.134045 | orchestrator | 2025-04-01 19:50:10.134054 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-04-01 19:50:10.134062 | orchestrator | Tuesday 01 April 2025 19:40:55 +0000 (0:00:00.646) 0:04:43.396 ********* 2025-04-01 19:50:10.134070 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.134078 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.134087 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.134095 | orchestrator | 2025-04-01 19:50:10.134103 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-04-01 19:50:10.134112 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.134120 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.134128 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.134137 | orchestrator | 2025-04-01 19:50:10.134145 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 
2025-04-01 19:50:10.134154 | orchestrator | Tuesday 01 April 2025 19:40:57 +0000 (0:00:01.398) 0:04:44.794 ********* 2025-04-01 19:50:10.134162 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.134170 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.134178 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.134187 | orchestrator | 2025-04-01 19:50:10.134195 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.134203 | orchestrator | Tuesday 01 April 2025 19:40:58 +0000 (0:00:01.472) 0:04:46.267 ********* 2025-04-01 19:50:10.134212 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.134220 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.134229 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.134246 | orchestrator | 2025-04-01 19:50:10.134255 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-04-01 19:50:10.134263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.134271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.134280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.134288 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.134297 | orchestrator | 2025-04-01 19:50:10.134305 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-01 19:50:10.134313 | orchestrator | Tuesday 01 April 2025 19:41:00 +0000 (0:00:01.363) 0:04:47.630 ********* 2025-04-01 19:50:10.134321 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.134330 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.134338 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.134347 | orchestrator | 2025-04-01 19:50:10.134355 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-01 19:50:10.134363 | orchestrator | Tuesday 01 April 2025 19:41:01 +0000 (0:00:01.062) 0:04:48.693 ********* 2025-04-01 19:50:10.134372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.134380 | orchestrator | 2025-04-01 19:50:10.134389 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-01 19:50:10.134397 | orchestrator | Tuesday 01 April 2025 19:41:01 +0000 (0:00:00.625) 0:04:49.319 ********* 2025-04-01 19:50:10.134405 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.134414 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.134422 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.134430 | orchestrator | 2025-04-01 19:50:10.134439 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-01 19:50:10.134447 | orchestrator | Tuesday 01 April 2025 19:41:02 +0000 (0:00:00.576) 0:04:49.895 ********* 2025-04-01 19:50:10.134455 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.134463 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.134472 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.134480 | orchestrator | 2025-04-01 19:50:10.134488 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-01 19:50:10.134497 | orchestrator | Tuesday 01 April 2025 19:41:03 +0000 (0:00:01.208) 0:04:51.104 ********* 2025-04-01 
19:50:10.134505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.134513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.134522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.134530 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.134538 | orchestrator | 2025-04-01 19:50:10.134547 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-01 19:50:10.134555 | orchestrator | Tuesday 01 April 2025 19:41:04 +0000 (0:00:00.715) 0:04:51.819 ********* 2025-04-01 19:50:10.134563 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.134572 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.134580 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.134588 | orchestrator | 2025-04-01 19:50:10.134597 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-01 19:50:10.134605 | orchestrator | Tuesday 01 April 2025 19:41:04 +0000 (0:00:00.338) 0:04:52.158 ********* 2025-04-01 19:50:10.134614 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.134670 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.134682 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.134691 | orchestrator | 2025-04-01 19:50:10.134705 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-01 19:50:10.134714 | orchestrator | Tuesday 01 April 2025 19:41:05 +0000 (0:00:00.654) 0:04:52.813 ********* 2025-04-01 19:50:10.134724 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.134737 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.134747 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.134762 | orchestrator | 2025-04-01 19:50:10.134771 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-04-01 19:50:10.134834 | orchestrator | Tuesday 01 April 2025 19:41:05 +0000 (0:00:00.635) 0:04:53.448 ********* 2025-04-01 19:50:10.134847 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.134856 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.134865 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.134874 | orchestrator | 2025-04-01 19:50:10.134883 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.134893 | orchestrator | Tuesday 01 April 2025 19:41:06 +0000 (0:00:00.374) 0:04:53.822 ********* 2025-04-01 19:50:10.134902 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.134911 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.134920 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.134929 | orchestrator | 2025-04-01 19:50:10.134937 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-04-01 19:50:10.134946 | orchestrator | 2025-04-01 19:50:10.134956 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.134965 | orchestrator | Tuesday 01 April 2025 19:41:08 +0000 (0:00:02.529) 0:04:56.352 ********* 2025-04-01 19:50:10.134973 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.134981 | orchestrator | 2025-04-01 19:50:10.134990 | orchestrator | TASK [ceph-handler 
: check for a mon container] ******************************** 2025-04-01 19:50:10.134998 | orchestrator | Tuesday 01 April 2025 19:41:09 +0000 (0:00:00.754) 0:04:57.106 ********* 2025-04-01 19:50:10.135006 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135014 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135023 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135031 | orchestrator | 2025-04-01 19:50:10.135039 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.135047 | orchestrator | Tuesday 01 April 2025 19:41:10 +0000 (0:00:00.799) 0:04:57.905 ********* 2025-04-01 19:50:10.135056 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135064 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135072 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135081 | orchestrator | 2025-04-01 19:50:10.135089 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.135098 | orchestrator | Tuesday 01 April 2025 19:41:11 +0000 (0:00:00.730) 0:04:58.636 ********* 2025-04-01 19:50:10.135106 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135114 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135122 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135129 | orchestrator | 2025-04-01 19:50:10.135137 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.135144 | orchestrator | Tuesday 01 April 2025 19:41:11 +0000 (0:00:00.382) 0:04:59.018 ********* 2025-04-01 19:50:10.135151 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135158 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135165 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135172 | orchestrator | 2025-04-01 19:50:10.135180 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.135187 | orchestrator | Tuesday 01 April 2025 19:41:11 +0000 (0:00:00.396) 0:04:59.414 ********* 2025-04-01 19:50:10.135194 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135201 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135209 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135216 | orchestrator | 2025-04-01 19:50:10.135223 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.135230 | orchestrator | Tuesday 01 April 2025 19:41:12 +0000 (0:00:00.861) 0:05:00.276 ********* 2025-04-01 19:50:10.135237 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135245 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135257 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135264 | orchestrator | 2025-04-01 19:50:10.135271 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.135279 | orchestrator | Tuesday 01 April 2025 19:41:13 +0000 (0:00:00.711) 0:05:00.988 ********* 2025-04-01 19:50:10.135286 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135293 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135300 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135308 | orchestrator | 2025-04-01 19:50:10.135315 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.135322 | 
orchestrator | Tuesday 01 April 2025 19:41:13 +0000 (0:00:00.366) 0:05:01.354 ********* 2025-04-01 19:50:10.135329 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135337 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135344 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135351 | orchestrator | 2025-04-01 19:50:10.135358 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.135365 | orchestrator | Tuesday 01 April 2025 19:41:14 +0000 (0:00:00.882) 0:05:02.236 ********* 2025-04-01 19:50:10.135372 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135380 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135387 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135394 | orchestrator | 2025-04-01 19:50:10.135401 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.135408 | orchestrator | Tuesday 01 April 2025 19:41:15 +0000 (0:00:00.655) 0:05:02.892 ********* 2025-04-01 19:50:10.135416 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135423 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135430 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135437 | orchestrator | 2025-04-01 19:50:10.135444 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.135452 | orchestrator | Tuesday 01 April 2025 19:41:16 +0000 (0:00:00.980) 0:05:03.872 ********* 2025-04-01 19:50:10.135459 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135467 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135474 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135481 | orchestrator | 2025-04-01 19:50:10.135488 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.135499 | orchestrator | Tuesday 01 April 2025 19:41:17 +0000 (0:00:01.027) 0:05:04.900 ********* 2025-04-01 19:50:10.135506 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135514 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135521 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135528 | orchestrator | 2025-04-01 19:50:10.135576 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.135586 | orchestrator | Tuesday 01 April 2025 19:41:17 +0000 (0:00:00.405) 0:05:05.306 ********* 2025-04-01 19:50:10.135594 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135601 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135609 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135616 | orchestrator | 2025-04-01 19:50:10.135639 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.135647 | orchestrator | Tuesday 01 April 2025 19:41:18 +0000 (0:00:00.364) 0:05:05.670 ********* 2025-04-01 19:50:10.135654 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135661 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135668 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135679 | orchestrator | 2025-04-01 19:50:10.135687 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.135694 | orchestrator | Tuesday 01 April 2025 19:41:18 +0000 (0:00:00.765) 0:05:06.436 ********* 
2025-04-01 19:50:10.135701 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135708 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135715 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135726 | orchestrator | 2025-04-01 19:50:10.135734 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.135741 | orchestrator | Tuesday 01 April 2025 19:41:19 +0000 (0:00:00.422) 0:05:06.859 ********* 2025-04-01 19:50:10.135748 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135755 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135762 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135769 | orchestrator | 2025-04-01 19:50:10.135776 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.135783 | orchestrator | Tuesday 01 April 2025 19:41:19 +0000 (0:00:00.335) 0:05:07.194 ********* 2025-04-01 19:50:10.135790 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135797 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135804 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135811 | orchestrator | 2025-04-01 19:50:10.135818 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.135825 | orchestrator | Tuesday 01 April 2025 19:41:19 +0000 (0:00:00.377) 0:05:07.572 ********* 2025-04-01 19:50:10.135832 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135839 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135846 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135853 | orchestrator | 2025-04-01 19:50:10.135860 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.135867 | orchestrator | Tuesday 01 April 2025 19:41:20 +0000 (0:00:00.669) 0:05:08.241 ********* 2025-04-01 19:50:10.135875 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135882 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135889 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135896 | orchestrator | 2025-04-01 19:50:10.135903 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.135910 | orchestrator | Tuesday 01 April 2025 19:41:21 +0000 (0:00:00.417) 0:05:08.658 ********* 2025-04-01 19:50:10.135917 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.135924 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.135931 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.135938 | orchestrator | 2025-04-01 19:50:10.135945 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.135953 | orchestrator | Tuesday 01 April 2025 19:41:21 +0000 (0:00:00.371) 0:05:09.030 ********* 2025-04-01 19:50:10.135960 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.135967 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.135974 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.135981 | orchestrator | 2025-04-01 19:50:10.135988 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.135995 | orchestrator | Tuesday 01 April 2025 19:41:22 +0000 (0:00:00.659) 0:05:09.689 ********* 2025-04-01 19:50:10.136002 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136009 
| orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136016 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136023 | orchestrator | 2025-04-01 19:50:10.136030 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.136037 | orchestrator | Tuesday 01 April 2025 19:41:22 +0000 (0:00:00.368) 0:05:10.057 ********* 2025-04-01 19:50:10.136044 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136051 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136058 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136065 | orchestrator | 2025-04-01 19:50:10.136072 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.136080 | orchestrator | Tuesday 01 April 2025 19:41:22 +0000 (0:00:00.381) 0:05:10.438 ********* 2025-04-01 19:50:10.136087 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136094 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136101 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136112 | orchestrator | 2025-04-01 19:50:10.136119 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.136133 | orchestrator | Tuesday 01 April 2025 19:41:23 +0000 (0:00:00.361) 0:05:10.799 ********* 2025-04-01 19:50:10.136140 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136147 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136154 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136161 | orchestrator | 2025-04-01 19:50:10.136168 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.136176 | orchestrator | Tuesday 01 April 2025 19:41:23 +0000 (0:00:00.629) 0:05:11.429 ********* 2025-04-01 19:50:10.136183 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136190 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136197 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136218 | orchestrator | 2025-04-01 19:50:10.136226 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.136234 | orchestrator | Tuesday 01 April 2025 19:41:24 +0000 (0:00:00.366) 0:05:11.796 ********* 2025-04-01 19:50:10.136242 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136250 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136257 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136265 | orchestrator | 2025-04-01 19:50:10.136317 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.136328 | orchestrator | Tuesday 01 April 2025 19:41:24 +0000 (0:00:00.418) 0:05:12.214 ********* 2025-04-01 19:50:10.136336 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136344 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136352 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136360 | orchestrator | 2025-04-01 19:50:10.136367 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.136379 | orchestrator | Tuesday 01 April 2025 19:41:24 +0000 (0:00:00.335) 0:05:12.550 ********* 2025-04-01 19:50:10.136387 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136394 | 
orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136402 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136410 | orchestrator | 2025-04-01 19:50:10.136418 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.136425 | orchestrator | Tuesday 01 April 2025 19:41:25 +0000 (0:00:00.667) 0:05:13.218 ********* 2025-04-01 19:50:10.136437 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136445 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136453 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136461 | orchestrator | 2025-04-01 19:50:10.136468 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.136476 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.410) 0:05:13.629 ********* 2025-04-01 19:50:10.136484 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136492 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136499 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136510 | orchestrator | 2025-04-01 19:50:10.136518 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.136525 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.363) 0:05:13.992 ********* 2025-04-01 19:50:10.136533 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136541 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136549 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136556 | orchestrator | 2025-04-01 19:50:10.136564 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.136572 | orchestrator | Tuesday 01 April 2025 19:41:26 +0000 (0:00:00.357) 0:05:14.350 ********* 2025-04-01 19:50:10.136580 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.136587 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.136594 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136605 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.136613 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.136620 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136660 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.136667 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.136674 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136681 | orchestrator | 2025-04-01 19:50:10.136688 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.136695 | orchestrator | Tuesday 01 April 2025 19:41:27 +0000 (0:00:00.696) 0:05:15.046 ********* 2025-04-01 19:50:10.136703 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-01 19:50:10.136710 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-01 19:50:10.136717 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136724 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-01 19:50:10.136731 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-01 19:50:10.136738 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136745 | orchestrator | skipping: 
[testbed-node-2] => (item=osd memory target)  2025-04-01 19:50:10.136752 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-01 19:50:10.136759 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136766 | orchestrator | 2025-04-01 19:50:10.136774 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.136781 | orchestrator | Tuesday 01 April 2025 19:41:27 +0000 (0:00:00.422) 0:05:15.469 ********* 2025-04-01 19:50:10.136788 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136795 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136802 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136809 | orchestrator | 2025-04-01 19:50:10.136816 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.136823 | orchestrator | Tuesday 01 April 2025 19:41:28 +0000 (0:00:00.413) 0:05:15.882 ********* 2025-04-01 19:50:10.136830 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136838 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136845 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136852 | orchestrator | 2025-04-01 19:50:10.136859 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.136866 | orchestrator | Tuesday 01 April 2025 19:41:28 +0000 (0:00:00.413) 0:05:16.296 ********* 2025-04-01 19:50:10.136873 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136881 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136888 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136895 | orchestrator | 2025-04-01 19:50:10.136902 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.136909 | orchestrator | Tuesday 01 April 2025 19:41:29 +0000 (0:00:00.808) 0:05:17.104 ********* 2025-04-01 19:50:10.136916 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.136923 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.136930 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.136937 | orchestrator | 2025-04-01 19:50:10.136944 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.136951 | orchestrator | Tuesday 01 April 2025 19:41:30 +0000 (0:00:00.538) 0:05:17.642 ********* 2025-04-01 19:50:10.136958 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137008 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137018 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137026 | orchestrator | 2025-04-01 19:50:10.137033 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.137040 | orchestrator | Tuesday 01 April 2025 19:41:30 +0000 (0:00:00.757) 0:05:18.401 ********* 2025-04-01 19:50:10.137047 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137060 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137067 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137074 | orchestrator | 2025-04-01 19:50:10.137081 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.137088 | orchestrator | Tuesday 01 April 2025 19:41:31 +0000 (0:00:00.573) 0:05:18.975 ********* 
2025-04-01 19:50:10.137095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.137102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.137109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.137116 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137123 | orchestrator | 2025-04-01 19:50:10.137130 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.137138 | orchestrator | Tuesday 01 April 2025 19:41:32 +0000 (0:00:01.184) 0:05:20.159 ********* 2025-04-01 19:50:10.137145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.137152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.137159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.137166 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137173 | orchestrator | 2025-04-01 19:50:10.137180 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.137186 | orchestrator | Tuesday 01 April 2025 19:41:33 +0000 (0:00:00.499) 0:05:20.658 ********* 2025-04-01 19:50:10.137193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.137199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.137205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.137212 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137218 | orchestrator | 2025-04-01 19:50:10.137224 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.137231 | orchestrator | Tuesday 01 April 2025 19:41:33 +0000 (0:00:00.520) 0:05:21.178 ********* 2025-04-01 19:50:10.137237 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137243 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137249 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137255 | orchestrator | 2025-04-01 19:50:10.137262 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.137268 | orchestrator | Tuesday 01 April 2025 19:41:33 +0000 (0:00:00.408) 0:05:21.587 ********* 2025-04-01 19:50:10.137274 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.137281 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137287 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.137293 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137299 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.137305 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137312 | orchestrator | 2025-04-01 19:50:10.137318 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.137328 | orchestrator | Tuesday 01 April 2025 19:41:34 +0000 (0:00:00.577) 0:05:22.165 ********* 2025-04-01 19:50:10.137334 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137341 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137347 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137353 | orchestrator | 2025-04-01 19:50:10.137359 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-04-01 19:50:10.137365 | orchestrator | Tuesday 01 April 2025 19:41:35 +0000 (0:00:00.697) 0:05:22.863 ********* 2025-04-01 19:50:10.137372 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137378 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137384 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137390 | orchestrator | 2025-04-01 19:50:10.137397 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.137408 | orchestrator | Tuesday 01 April 2025 19:41:35 +0000 (0:00:00.444) 0:05:23.307 ********* 2025-04-01 19:50:10.137415 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.137421 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137427 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.137434 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137440 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.137446 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137452 | orchestrator | 2025-04-01 19:50:10.137459 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.137465 | orchestrator | Tuesday 01 April 2025 19:41:36 +0000 (0:00:00.634) 0:05:23.942 ********* 2025-04-01 19:50:10.137482 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137489 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137495 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137502 | orchestrator | 2025-04-01 19:50:10.137508 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.137515 | orchestrator | Tuesday 01 April 2025 19:41:36 +0000 (0:00:00.407) 0:05:24.349 ********* 2025-04-01 19:50:10.137521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.137528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.137534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.137541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:50:10.137547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:50:10.137553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:50:10.137560 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137582 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:50:10.137600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:50:10.137607 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:50:10.137613 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137636 | orchestrator | 2025-04-01 19:50:10.137643 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.137650 | orchestrator | Tuesday 01 April 2025 19:41:37 +0000 (0:00:01.028) 0:05:25.378 ********* 2025-04-01 19:50:10.137657 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137664 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137671 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137677 | orchestrator | 2025-04-01 19:50:10.137684 | 
orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.137691 | orchestrator | Tuesday 01 April 2025 19:41:38 +0000 (0:00:00.618) 0:05:25.997 ********* 2025-04-01 19:50:10.137698 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137705 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137712 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137719 | orchestrator | 2025-04-01 19:50:10.137726 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.137733 | orchestrator | Tuesday 01 April 2025 19:41:39 +0000 (0:00:00.873) 0:05:26.870 ********* 2025-04-01 19:50:10.137739 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137746 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137753 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137760 | orchestrator | 2025-04-01 19:50:10.137767 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.137773 | orchestrator | Tuesday 01 April 2025 19:41:39 +0000 (0:00:00.573) 0:05:27.444 ********* 2025-04-01 19:50:10.137780 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137787 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.137794 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.137805 | orchestrator | 2025-04-01 19:50:10.137812 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-04-01 19:50:10.137819 | orchestrator | Tuesday 01 April 2025 19:41:40 +0000 (0:00:00.961) 0:05:28.406 ********* 2025-04-01 19:50:10.137826 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.137833 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.137840 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.137846 | orchestrator | 2025-04-01 19:50:10.137853 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-04-01 19:50:10.137860 | orchestrator | Tuesday 01 April 2025 19:41:41 +0000 (0:00:00.408) 0:05:28.814 ********* 2025-04-01 19:50:10.137867 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.137873 | orchestrator | 2025-04-01 19:50:10.137880 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-04-01 19:50:10.137887 | orchestrator | Tuesday 01 April 2025 19:41:42 +0000 (0:00:00.973) 0:05:29.788 ********* 2025-04-01 19:50:10.137894 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.137901 | orchestrator | 2025-04-01 19:50:10.137907 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-04-01 19:50:10.137914 | orchestrator | Tuesday 01 April 2025 19:41:42 +0000 (0:00:00.185) 0:05:29.973 ********* 2025-04-01 19:50:10.137921 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-01 19:50:10.137928 | orchestrator | 2025-04-01 19:50:10.137934 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-04-01 19:50:10.137941 | orchestrator | Tuesday 01 April 2025 19:41:43 +0000 (0:00:00.885) 0:05:30.859 ********* 2025-04-01 19:50:10.137948 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.137955 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.137962 | orchestrator | ok: [testbed-node-2] 
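The "generate monitor initial keyring" and "create monitor initial keyring" steps above boil down to producing one shared mon. secret and writing it into a bootstrap keyring on every monitor host. A minimal Ansible sketch of that pattern follows; it is not the ceph-ansible implementation, and the keyring path, register name and capability string are assumptions.

- name: generate monitor initial keyring (sketch)
  # Generate the key once and reuse it on every monitor.
  command: ceph-authtool --gen-print-key
  register: initial_mon_key
  delegate_to: localhost
  run_once: true
  changed_when: true

- name: create monitor initial keyring (sketch)
  # Write the shared key into the keyring later consumed by the monitor mkfs step.
  command: >
    ceph-authtool /var/lib/ceph/tmp/ceph.mon.keyring
    --create-keyring --name=mon.
    --add-key={{ initial_mon_key.stdout }}
    --cap mon 'allow *'
  args:
    creates: /var/lib/ceph/tmp/ceph.mon.keyring

In the containerized deployment logged here the equivalent commands are wrapped to run inside the monitor image, which is what the later "set_fact ceph-authtool container command" step prepares.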
2025-04-01 19:50:10.137969 | orchestrator | 2025-04-01 19:50:10.137976 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-04-01 19:50:10.137983 | orchestrator | Tuesday 01 April 2025 19:41:43 +0000 (0:00:00.471) 0:05:31.331 ********* 2025-04-01 19:50:10.137989 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.137996 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138002 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138009 | orchestrator | 2025-04-01 19:50:10.138030 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-04-01 19:50:10.138038 | orchestrator | Tuesday 01 April 2025 19:41:44 +0000 (0:00:00.564) 0:05:31.896 ********* 2025-04-01 19:50:10.138044 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138051 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138057 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138063 | orchestrator | 2025-04-01 19:50:10.138069 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-04-01 19:50:10.138079 | orchestrator | Tuesday 01 April 2025 19:41:45 +0000 (0:00:01.226) 0:05:33.122 ********* 2025-04-01 19:50:10.138085 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138092 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138098 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138104 | orchestrator | 2025-04-01 19:50:10.138111 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-04-01 19:50:10.138117 | orchestrator | Tuesday 01 April 2025 19:41:46 +0000 (0:00:00.903) 0:05:34.026 ********* 2025-04-01 19:50:10.138123 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138129 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138135 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138142 | orchestrator | 2025-04-01 19:50:10.138148 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-04-01 19:50:10.138154 | orchestrator | Tuesday 01 April 2025 19:41:47 +0000 (0:00:00.711) 0:05:34.737 ********* 2025-04-01 19:50:10.138161 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138167 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138177 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138184 | orchestrator | 2025-04-01 19:50:10.138190 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-04-01 19:50:10.138196 | orchestrator | Tuesday 01 April 2025 19:41:47 +0000 (0:00:00.696) 0:05:35.434 ********* 2025-04-01 19:50:10.138219 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138226 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138233 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138239 | orchestrator | 2025-04-01 19:50:10.138245 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-04-01 19:50:10.138252 | orchestrator | Tuesday 01 April 2025 19:41:48 +0000 (0:00:00.611) 0:05:36.045 ********* 2025-04-01 19:50:10.138258 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138264 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138271 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138277 | orchestrator | 2025-04-01 19:50:10.138283 | orchestrator | TASK [ceph-mon : import admin 
keyring into mon keyring] ************************ 2025-04-01 19:50:10.138289 | orchestrator | Tuesday 01 April 2025 19:41:48 +0000 (0:00:00.396) 0:05:36.442 ********* 2025-04-01 19:50:10.138296 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138302 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138308 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138315 | orchestrator | 2025-04-01 19:50:10.138321 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-04-01 19:50:10.138327 | orchestrator | Tuesday 01 April 2025 19:41:49 +0000 (0:00:00.393) 0:05:36.836 ********* 2025-04-01 19:50:10.138334 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138340 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138346 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138353 | orchestrator | 2025-04-01 19:50:10.138359 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-04-01 19:50:10.138365 | orchestrator | Tuesday 01 April 2025 19:41:49 +0000 (0:00:00.395) 0:05:37.231 ********* 2025-04-01 19:50:10.138372 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138378 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138384 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138391 | orchestrator | 2025-04-01 19:50:10.138397 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-04-01 19:50:10.138407 | orchestrator | Tuesday 01 April 2025 19:41:51 +0000 (0:00:01.432) 0:05:38.663 ********* 2025-04-01 19:50:10.138413 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138419 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138426 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138436 | orchestrator | 2025-04-01 19:50:10.138442 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-04-01 19:50:10.138449 | orchestrator | Tuesday 01 April 2025 19:41:51 +0000 (0:00:00.376) 0:05:39.040 ********* 2025-04-01 19:50:10.138455 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.138462 | orchestrator | 2025-04-01 19:50:10.138468 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-04-01 19:50:10.138475 | orchestrator | Tuesday 01 April 2025 19:41:52 +0000 (0:00:00.699) 0:05:39.740 ********* 2025-04-01 19:50:10.138481 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138488 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138494 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138500 | orchestrator | 2025-04-01 19:50:10.138507 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-04-01 19:50:10.138513 | orchestrator | Tuesday 01 April 2025 19:41:52 +0000 (0:00:00.672) 0:05:40.412 ********* 2025-04-01 19:50:10.138519 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138526 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138532 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138538 | orchestrator | 2025-04-01 19:50:10.138544 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-04-01 19:50:10.138557 | orchestrator | Tuesday 01 April 2025 19:41:53 +0000 
(0:00:00.453) 0:05:40.866 ********* 2025-04-01 19:50:10.138563 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.138569 | orchestrator | 2025-04-01 19:50:10.138576 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-04-01 19:50:10.138582 | orchestrator | Tuesday 01 April 2025 19:41:53 +0000 (0:00:00.712) 0:05:41.578 ********* 2025-04-01 19:50:10.138588 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138594 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138601 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138607 | orchestrator | 2025-04-01 19:50:10.138613 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-04-01 19:50:10.138620 | orchestrator | Tuesday 01 April 2025 19:41:55 +0000 (0:00:01.956) 0:05:43.535 ********* 2025-04-01 19:50:10.138637 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138644 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138650 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138657 | orchestrator | 2025-04-01 19:50:10.138663 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-04-01 19:50:10.138669 | orchestrator | Tuesday 01 April 2025 19:41:57 +0000 (0:00:01.345) 0:05:44.880 ********* 2025-04-01 19:50:10.138676 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138682 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138688 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138694 | orchestrator | 2025-04-01 19:50:10.138701 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-04-01 19:50:10.138710 | orchestrator | Tuesday 01 April 2025 19:41:59 +0000 (0:00:01.738) 0:05:46.619 ********* 2025-04-01 19:50:10.138717 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138723 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138729 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138736 | orchestrator | 2025-04-01 19:50:10.138742 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-04-01 19:50:10.138748 | orchestrator | Tuesday 01 April 2025 19:42:01 +0000 (0:00:02.221) 0:05:48.841 ********* 2025-04-01 19:50:10.138755 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.138761 | orchestrator | 2025-04-01 19:50:10.138767 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-04-01 19:50:10.138773 | orchestrator | Tuesday 01 April 2025 19:42:01 +0000 (0:00:00.642) 0:05:49.483 ********* 2025-04-01 19:50:10.138795 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
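The "waiting for the monitor(s) to form the quorum..." retry above is the usual until/retries polling loop around a quorum check; the log shows one failed attempt before the monitors agree. A hedged sketch of that pattern follows; the podman exec wrapper, container name, "mons" group name, register variable and retry/delay values are assumptions rather than the actual ceph-ansible task.

- name: waiting for the monitor(s) to form the quorum (sketch)
  # Poll quorum_status until every monitor host shows up in quorum_names.
  command: >
    podman exec ceph-mon-{{ ansible_facts['hostname'] }}
    ceph --cluster ceph quorum_status --format json
  register: quorum_check
  run_once: true
  retries: 10
  delay: 20
  changed_when: false
  until: (quorum_check.stdout | from_json)['quorum_names'] | length == groups['mons'] | length

Only once quorum is reported does the play continue with "fetch ceph initial keys".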
2025-04-01 19:50:10.138803 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138809 | orchestrator | 2025-04-01 19:50:10.138815 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-04-01 19:50:10.138822 | orchestrator | Tuesday 01 April 2025 19:42:23 +0000 (0:00:21.496) 0:06:10.980 ********* 2025-04-01 19:50:10.138828 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138834 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138841 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138847 | orchestrator | 2025-04-01 19:50:10.138854 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-04-01 19:50:10.138860 | orchestrator | Tuesday 01 April 2025 19:42:29 +0000 (0:00:06.515) 0:06:17.495 ********* 2025-04-01 19:50:10.138866 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.138873 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.138879 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.138885 | orchestrator | 2025-04-01 19:50:10.138891 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.138898 | orchestrator | Tuesday 01 April 2025 19:42:31 +0000 (0:00:01.327) 0:06:18.822 ********* 2025-04-01 19:50:10.138908 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.138915 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.138921 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.138927 | orchestrator | 2025-04-01 19:50:10.138934 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-01 19:50:10.138940 | orchestrator | Tuesday 01 April 2025 19:42:31 +0000 (0:00:00.716) 0:06:19.539 ********* 2025-04-01 19:50:10.138946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.138953 | orchestrator | 2025-04-01 19:50:10.138959 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-01 19:50:10.138965 | orchestrator | Tuesday 01 April 2025 19:42:32 +0000 (0:00:00.845) 0:06:20.385 ********* 2025-04-01 19:50:10.138971 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.138978 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.138984 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.138990 | orchestrator | 2025-04-01 19:50:10.138997 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-01 19:50:10.139003 | orchestrator | Tuesday 01 April 2025 19:42:33 +0000 (0:00:00.383) 0:06:20.768 ********* 2025-04-01 19:50:10.139009 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.139015 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.139022 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.139028 | orchestrator | 2025-04-01 19:50:10.139034 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-01 19:50:10.139041 | orchestrator | Tuesday 01 April 2025 19:42:34 +0000 (0:00:01.412) 0:06:22.180 ********* 2025-04-01 19:50:10.139047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.139053 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.139060 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 
19:50:10.139066 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139072 | orchestrator | 2025-04-01 19:50:10.139078 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-01 19:50:10.139085 | orchestrator | Tuesday 01 April 2025 19:42:35 +0000 (0:00:01.087) 0:06:23.267 ********* 2025-04-01 19:50:10.139091 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139097 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139104 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.139110 | orchestrator | 2025-04-01 19:50:10.139116 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.139123 | orchestrator | Tuesday 01 April 2025 19:42:36 +0000 (0:00:00.834) 0:06:24.102 ********* 2025-04-01 19:50:10.139129 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.139135 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.139141 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.139148 | orchestrator | 2025-04-01 19:50:10.139154 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-04-01 19:50:10.139160 | orchestrator | 2025-04-01 19:50:10.139166 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.139173 | orchestrator | Tuesday 01 April 2025 19:42:38 +0000 (0:00:02.321) 0:06:26.423 ********* 2025-04-01 19:50:10.139179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.139186 | orchestrator | 2025-04-01 19:50:10.139192 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-01 19:50:10.139198 | orchestrator | Tuesday 01 April 2025 19:42:39 +0000 (0:00:00.835) 0:06:27.259 ********* 2025-04-01 19:50:10.139204 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139211 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139217 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.139223 | orchestrator | 2025-04-01 19:50:10.139230 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.139239 | orchestrator | Tuesday 01 April 2025 19:42:40 +0000 (0:00:00.802) 0:06:28.061 ********* 2025-04-01 19:50:10.139246 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139252 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139259 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139265 | orchestrator | 2025-04-01 19:50:10.139271 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.139277 | orchestrator | Tuesday 01 April 2025 19:42:40 +0000 (0:00:00.401) 0:06:28.463 ********* 2025-04-01 19:50:10.139284 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139290 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139296 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139305 | orchestrator | 2025-04-01 19:50:10.139315 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.139321 | orchestrator | Tuesday 01 April 2025 19:42:41 +0000 (0:00:00.710) 0:06:29.174 ********* 2025-04-01 19:50:10.139328 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139347 | orchestrator | skipping: 
[testbed-node-1] 2025-04-01 19:50:10.139354 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139361 | orchestrator | 2025-04-01 19:50:10.139367 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.139373 | orchestrator | Tuesday 01 April 2025 19:42:41 +0000 (0:00:00.376) 0:06:29.550 ********* 2025-04-01 19:50:10.139380 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139386 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139392 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.139399 | orchestrator | 2025-04-01 19:50:10.139405 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.139412 | orchestrator | Tuesday 01 April 2025 19:42:42 +0000 (0:00:00.732) 0:06:30.282 ********* 2025-04-01 19:50:10.139418 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139425 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139431 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139437 | orchestrator | 2025-04-01 19:50:10.139443 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.139450 | orchestrator | Tuesday 01 April 2025 19:42:43 +0000 (0:00:00.392) 0:06:30.675 ********* 2025-04-01 19:50:10.139456 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139462 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139469 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139475 | orchestrator | 2025-04-01 19:50:10.139481 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.139488 | orchestrator | Tuesday 01 April 2025 19:42:43 +0000 (0:00:00.668) 0:06:31.343 ********* 2025-04-01 19:50:10.139494 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139500 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139506 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139513 | orchestrator | 2025-04-01 19:50:10.139519 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.139525 | orchestrator | Tuesday 01 April 2025 19:42:44 +0000 (0:00:00.360) 0:06:31.703 ********* 2025-04-01 19:50:10.139532 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139538 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139544 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139550 | orchestrator | 2025-04-01 19:50:10.139557 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.139563 | orchestrator | Tuesday 01 April 2025 19:42:44 +0000 (0:00:00.373) 0:06:32.077 ********* 2025-04-01 19:50:10.139569 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139576 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139582 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139588 | orchestrator | 2025-04-01 19:50:10.139595 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.139601 | orchestrator | Tuesday 01 April 2025 19:42:44 +0000 (0:00:00.363) 0:06:32.440 ********* 2025-04-01 19:50:10.139611 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139618 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139635 | orchestrator | ok: [testbed-node-2] 
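The block of "check for a ... container" tasks above (repeated here at the start of the "Apply role ceph-mgr" play) only probes which Ceph containers exist on each host, so the following "set_fact handler_*_status" tasks and the restart handlers can be limited to hosts that actually run that daemon. A minimal sketch of the check-then-fact pattern, with podman, the container naming scheme and the register name as assumptions:

- name: check for a mon container (sketch)
  # Non-failing probe: empty stdout simply means no such container is running.
  command: "podman ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  failed_when: false
  changed_when: false

- name: set_fact handler_mon_status (sketch)
  set_fact:
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout | default('') | length) > 0 }}"

On testbed-node-0/1/2 the mon, mgr and ceph-crash checks come back "ok" while the osd, mds, rgw and gateway checks are skipped, which matches handler_mon_status, handler_mgr_status and handler_crash_status being the only status facts set.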
2025-04-01 19:50:10.139642 | orchestrator | 2025-04-01 19:50:10.139648 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.139655 | orchestrator | Tuesday 01 April 2025 19:42:45 +0000 (0:00:01.090) 0:06:33.531 ********* 2025-04-01 19:50:10.139661 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139667 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139673 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139680 | orchestrator | 2025-04-01 19:50:10.139686 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.139692 | orchestrator | Tuesday 01 April 2025 19:42:46 +0000 (0:00:00.431) 0:06:33.962 ********* 2025-04-01 19:50:10.139699 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139705 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139711 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.139718 | orchestrator | 2025-04-01 19:50:10.139724 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.139730 | orchestrator | Tuesday 01 April 2025 19:42:46 +0000 (0:00:00.375) 0:06:34.338 ********* 2025-04-01 19:50:10.139736 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139743 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139749 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139755 | orchestrator | 2025-04-01 19:50:10.139762 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.139768 | orchestrator | Tuesday 01 April 2025 19:42:47 +0000 (0:00:00.346) 0:06:34.684 ********* 2025-04-01 19:50:10.139774 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139780 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139787 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139793 | orchestrator | 2025-04-01 19:50:10.139799 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.139806 | orchestrator | Tuesday 01 April 2025 19:42:47 +0000 (0:00:00.608) 0:06:35.292 ********* 2025-04-01 19:50:10.139812 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139818 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139824 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139831 | orchestrator | 2025-04-01 19:50:10.139837 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.139843 | orchestrator | Tuesday 01 April 2025 19:42:48 +0000 (0:00:00.380) 0:06:35.673 ********* 2025-04-01 19:50:10.139850 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139856 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139862 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139868 | orchestrator | 2025-04-01 19:50:10.139875 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.139881 | orchestrator | Tuesday 01 April 2025 19:42:48 +0000 (0:00:00.355) 0:06:36.028 ********* 2025-04-01 19:50:10.139887 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.139893 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.139900 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.139906 | orchestrator | 2025-04-01 19:50:10.139912 | 
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.139922 | orchestrator | Tuesday 01 April 2025 19:42:48 +0000 (0:00:00.362) 0:06:36.390 ********* 2025-04-01 19:50:10.139928 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139949 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139956 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.139962 | orchestrator | 2025-04-01 19:50:10.139969 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.139975 | orchestrator | Tuesday 01 April 2025 19:42:49 +0000 (0:00:00.772) 0:06:37.163 ********* 2025-04-01 19:50:10.139981 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.139991 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.139998 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.140007 | orchestrator | 2025-04-01 19:50:10.140014 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.140020 | orchestrator | Tuesday 01 April 2025 19:42:49 +0000 (0:00:00.446) 0:06:37.609 ********* 2025-04-01 19:50:10.140026 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140033 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140039 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140045 | orchestrator | 2025-04-01 19:50:10.140052 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.140058 | orchestrator | Tuesday 01 April 2025 19:42:50 +0000 (0:00:00.411) 0:06:38.021 ********* 2025-04-01 19:50:10.140064 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140071 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140077 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140083 | orchestrator | 2025-04-01 19:50:10.140089 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.140096 | orchestrator | Tuesday 01 April 2025 19:42:50 +0000 (0:00:00.412) 0:06:38.433 ********* 2025-04-01 19:50:10.140102 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140108 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140114 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140121 | orchestrator | 2025-04-01 19:50:10.140127 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.140133 | orchestrator | Tuesday 01 April 2025 19:42:51 +0000 (0:00:00.734) 0:06:39.168 ********* 2025-04-01 19:50:10.140139 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140146 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140152 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140158 | orchestrator | 2025-04-01 19:50:10.140165 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.140171 | orchestrator | Tuesday 01 April 2025 19:42:51 +0000 (0:00:00.400) 0:06:39.568 ********* 2025-04-01 19:50:10.140177 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140183 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140189 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140195 | orchestrator | 2025-04-01 19:50:10.140202 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 
2025-04-01 19:50:10.140208 | orchestrator | Tuesday 01 April 2025 19:42:52 +0000 (0:00:00.417) 0:06:39.986 ********* 2025-04-01 19:50:10.140214 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140221 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140227 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140233 | orchestrator | 2025-04-01 19:50:10.140239 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.140245 | orchestrator | Tuesday 01 April 2025 19:42:52 +0000 (0:00:00.350) 0:06:40.336 ********* 2025-04-01 19:50:10.140252 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140258 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140264 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140270 | orchestrator | 2025-04-01 19:50:10.140277 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.140283 | orchestrator | Tuesday 01 April 2025 19:42:53 +0000 (0:00:00.724) 0:06:41.061 ********* 2025-04-01 19:50:10.140289 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140296 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140302 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140308 | orchestrator | 2025-04-01 19:50:10.140314 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.140321 | orchestrator | Tuesday 01 April 2025 19:42:53 +0000 (0:00:00.439) 0:06:41.501 ********* 2025-04-01 19:50:10.140327 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140338 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140344 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140350 | orchestrator | 2025-04-01 19:50:10.140356 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.140363 | orchestrator | Tuesday 01 April 2025 19:42:54 +0000 (0:00:00.418) 0:06:41.919 ********* 2025-04-01 19:50:10.140369 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140376 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140382 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140388 | orchestrator | 2025-04-01 19:50:10.140394 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.140401 | orchestrator | Tuesday 01 April 2025 19:42:54 +0000 (0:00:00.409) 0:06:42.329 ********* 2025-04-01 19:50:10.140407 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140413 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140420 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140426 | orchestrator | 2025-04-01 19:50:10.140432 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.140438 | orchestrator | Tuesday 01 April 2025 19:42:55 +0000 (0:00:00.828) 0:06:43.158 ********* 2025-04-01 19:50:10.140445 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140451 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140457 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140464 | orchestrator | 2025-04-01 19:50:10.140470 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.140476 | orchestrator | Tuesday 01 April 2025 19:42:56 +0000 (0:00:00.454) 0:06:43.612 ********* 2025-04-01 19:50:10.140483 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.140489 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.140495 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140502 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.140522 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.140529 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140535 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.140542 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.140548 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140554 | orchestrator | 2025-04-01 19:50:10.140560 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.140567 | orchestrator | Tuesday 01 April 2025 19:42:56 +0000 (0:00:00.429) 0:06:44.041 ********* 2025-04-01 19:50:10.140573 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-01 19:50:10.140579 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-01 19:50:10.140586 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140592 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-01 19:50:10.140598 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-01 19:50:10.140605 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140611 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-01 19:50:10.140617 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-01 19:50:10.140656 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140663 | orchestrator | 2025-04-01 19:50:10.140670 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.140676 | orchestrator | Tuesday 01 April 2025 19:42:56 +0000 (0:00:00.456) 0:06:44.498 ********* 2025-04-01 19:50:10.140682 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140689 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140695 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140701 | orchestrator | 2025-04-01 19:50:10.140708 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.140719 | orchestrator | Tuesday 01 April 2025 19:42:57 +0000 (0:00:00.696) 0:06:45.195 ********* 2025-04-01 19:50:10.140725 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140731 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140737 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140744 | orchestrator | 2025-04-01 19:50:10.140750 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.140756 | orchestrator | Tuesday 01 April 2025 19:42:57 +0000 (0:00:00.388) 0:06:45.583 ********* 2025-04-01 19:50:10.140762 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140769 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140775 | orchestrator | skipping: [testbed-node-2] 2025-04-01 
19:50:10.140784 | orchestrator | 2025-04-01 19:50:10.140793 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.140800 | orchestrator | Tuesday 01 April 2025 19:42:58 +0000 (0:00:00.360) 0:06:45.944 ********* 2025-04-01 19:50:10.140806 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140812 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140818 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140825 | orchestrator | 2025-04-01 19:50:10.140831 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.140837 | orchestrator | Tuesday 01 April 2025 19:42:59 +0000 (0:00:00.754) 0:06:46.698 ********* 2025-04-01 19:50:10.140843 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140850 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140856 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140862 | orchestrator | 2025-04-01 19:50:10.140868 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.140875 | orchestrator | Tuesday 01 April 2025 19:42:59 +0000 (0:00:00.388) 0:06:47.087 ********* 2025-04-01 19:50:10.140881 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140887 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.140893 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.140899 | orchestrator | 2025-04-01 19:50:10.140906 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.140912 | orchestrator | Tuesday 01 April 2025 19:42:59 +0000 (0:00:00.384) 0:06:47.472 ********* 2025-04-01 19:50:10.140918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.140924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.140931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.140937 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140943 | orchestrator | 2025-04-01 19:50:10.140949 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.140955 | orchestrator | Tuesday 01 April 2025 19:43:00 +0000 (0:00:00.477) 0:06:47.949 ********* 2025-04-01 19:50:10.140962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.140968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.140974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.140980 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.140987 | orchestrator | 2025-04-01 19:50:10.140993 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.140999 | orchestrator | Tuesday 01 April 2025 19:43:00 +0000 (0:00:00.435) 0:06:48.385 ********* 2025-04-01 19:50:10.141006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.141012 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.141018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.141024 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141030 | orchestrator | 2025-04-01 19:50:10.141037 | orchestrator | TASK [ceph-facts : reset 
rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.141047 | orchestrator | Tuesday 01 April 2025 19:43:01 +0000 (0:00:00.489) 0:06:48.874 ********* 2025-04-01 19:50:10.141053 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141059 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141065 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141072 | orchestrator | 2025-04-01 19:50:10.141099 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.141107 | orchestrator | Tuesday 01 April 2025 19:43:01 +0000 (0:00:00.629) 0:06:49.504 ********* 2025-04-01 19:50:10.141113 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.141120 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141126 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.141132 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141139 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.141145 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141151 | orchestrator | 2025-04-01 19:50:10.141157 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.141164 | orchestrator | Tuesday 01 April 2025 19:43:02 +0000 (0:00:00.595) 0:06:50.099 ********* 2025-04-01 19:50:10.141170 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141176 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141183 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141189 | orchestrator | 2025-04-01 19:50:10.141195 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.141201 | orchestrator | Tuesday 01 April 2025 19:43:02 +0000 (0:00:00.377) 0:06:50.477 ********* 2025-04-01 19:50:10.141206 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141212 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141218 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141224 | orchestrator | 2025-04-01 19:50:10.141230 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.141236 | orchestrator | Tuesday 01 April 2025 19:43:03 +0000 (0:00:00.360) 0:06:50.837 ********* 2025-04-01 19:50:10.141242 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.141248 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141254 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.141260 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141266 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.141272 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141278 | orchestrator | 2025-04-01 19:50:10.141284 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.141290 | orchestrator | Tuesday 01 April 2025 19:43:04 +0000 (0:00:00.896) 0:06:51.734 ********* 2025-04-01 19:50:10.141296 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141301 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141307 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141313 | orchestrator | 2025-04-01 19:50:10.141319 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 
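(Editor's note: the ceph-facts radosgw address tasks above are all skipped in this run; when a radosgw_interface or address block is configured they resolve the address each RGW instance will bind to. A rough manual equivalent for the IPv4-interface case, with the interface name as a placeholder, is:)

    # Hypothetical interface name; ceph-ansible would normally take this from radosgw_interface.
    RADOSGW_INTERFACE="eth0"
    # First IPv4 address bound to that interface, comparable to the _radosgw_address fact.
    ip -o -4 addr show dev "$RADOSGW_INTERFACE" | awk '{print $4}' | cut -d/ -f1 | head -n1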
2025-04-01 19:50:10.141325 | orchestrator | Tuesday 01 April 2025 19:43:04 +0000 (0:00:00.427) 0:06:52.161 ********* 2025-04-01 19:50:10.141331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.141337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.141343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.141349 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:50:10.141355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:50:10.141361 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141366 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:50:10.141372 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141379 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:50:10.141388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:50:10.141394 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:50:10.141400 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141406 | orchestrator | 2025-04-01 19:50:10.141412 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.141418 | orchestrator | Tuesday 01 April 2025 19:43:05 +0000 (0:00:00.722) 0:06:52.884 ********* 2025-04-01 19:50:10.141424 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141430 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141436 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141442 | orchestrator | 2025-04-01 19:50:10.141448 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.141454 | orchestrator | Tuesday 01 April 2025 19:43:06 +0000 (0:00:00.930) 0:06:53.814 ********* 2025-04-01 19:50:10.141460 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141466 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141472 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141477 | orchestrator | 2025-04-01 19:50:10.141483 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.141489 | orchestrator | Tuesday 01 April 2025 19:43:06 +0000 (0:00:00.619) 0:06:54.433 ********* 2025-04-01 19:50:10.141495 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141501 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141507 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141513 | orchestrator | 2025-04-01 19:50:10.141522 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.141528 | orchestrator | Tuesday 01 April 2025 19:43:07 +0000 (0:00:00.927) 0:06:55.360 ********* 2025-04-01 19:50:10.141534 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141540 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141546 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141552 | orchestrator | 2025-04-01 19:50:10.141558 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-01 19:50:10.141564 | orchestrator | Tuesday 01 April 2025 19:43:08 +0000 (0:00:00.599) 0:06:55.960 ********* 2025-04-01 19:50:10.141570 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-04-01 19:50:10.141576 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.141582 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.141588 | orchestrator | 2025-04-01 19:50:10.141607 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-01 19:50:10.141614 | orchestrator | Tuesday 01 April 2025 19:43:09 +0000 (0:00:01.314) 0:06:57.274 ********* 2025-04-01 19:50:10.141620 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.141638 | orchestrator | 2025-04-01 19:50:10.141644 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-01 19:50:10.141650 | orchestrator | Tuesday 01 April 2025 19:43:10 +0000 (0:00:00.620) 0:06:57.895 ********* 2025-04-01 19:50:10.141656 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.141662 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.141668 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.141674 | orchestrator | 2025-04-01 19:50:10.141680 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-01 19:50:10.141686 | orchestrator | Tuesday 01 April 2025 19:43:10 +0000 (0:00:00.707) 0:06:58.602 ********* 2025-04-01 19:50:10.141692 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141698 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141704 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141710 | orchestrator | 2025-04-01 19:50:10.141716 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-01 19:50:10.141722 | orchestrator | Tuesday 01 April 2025 19:43:11 +0000 (0:00:00.649) 0:06:59.251 ********* 2025-04-01 19:50:10.141732 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:50:10.141738 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:50:10.141744 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:50:10.141750 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-01 19:50:10.141756 | orchestrator | 2025-04-01 19:50:10.141762 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-01 19:50:10.141768 | orchestrator | Tuesday 01 April 2025 19:43:18 +0000 (0:00:06.549) 0:07:05.801 ********* 2025-04-01 19:50:10.141774 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.141784 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.141790 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.141796 | orchestrator | 2025-04-01 19:50:10.141802 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-04-01 19:50:10.141808 | orchestrator | Tuesday 01 April 2025 19:43:18 +0000 (0:00:00.404) 0:07:06.206 ********* 2025-04-01 19:50:10.141814 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-01 19:50:10.141820 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-01 19:50:10.141826 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-01 19:50:10.141832 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-01 19:50:10.141841 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-04-01 19:50:10.141847 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:50:10.141853 | orchestrator | 2025-04-01 19:50:10.141859 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-01 19:50:10.141865 | orchestrator | Tuesday 01 April 2025 19:43:21 +0000 (0:00:02.410) 0:07:08.616 ********* 2025-04-01 19:50:10.141871 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-01 19:50:10.141877 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-01 19:50:10.141883 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-01 19:50:10.141889 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:50:10.141895 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-01 19:50:10.141901 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-01 19:50:10.141907 | orchestrator | 2025-04-01 19:50:10.141913 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-01 19:50:10.141919 | orchestrator | Tuesday 01 April 2025 19:43:22 +0000 (0:00:01.400) 0:07:10.016 ********* 2025-04-01 19:50:10.141925 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.141931 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.141937 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.141943 | orchestrator | 2025-04-01 19:50:10.141949 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-01 19:50:10.141955 | orchestrator | Tuesday 01 April 2025 19:43:23 +0000 (0:00:00.747) 0:07:10.764 ********* 2025-04-01 19:50:10.141961 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.141967 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.141972 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.141978 | orchestrator | 2025-04-01 19:50:10.141984 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-01 19:50:10.141990 | orchestrator | Tuesday 01 April 2025 19:43:23 +0000 (0:00:00.620) 0:07:11.384 ********* 2025-04-01 19:50:10.141996 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.142002 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.142008 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.142027 | orchestrator | 2025-04-01 19:50:10.142034 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-04-01 19:50:10.142040 | orchestrator | Tuesday 01 April 2025 19:43:24 +0000 (0:00:00.370) 0:07:11.755 ********* 2025-04-01 19:50:10.142049 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.142059 | orchestrator | 2025-04-01 19:50:10.142065 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-04-01 19:50:10.142071 | orchestrator | Tuesday 01 April 2025 19:43:24 +0000 (0:00:00.602) 0:07:12.357 ********* 2025-04-01 19:50:10.142077 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.142083 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.142089 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.142095 | orchestrator | 2025-04-01 19:50:10.142104 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-04-01 19:50:10.142110 | orchestrator | 
Tuesday 01 April 2025 19:43:25 +0000 (0:00:00.691) 0:07:13.049 ********* 2025-04-01 19:50:10.142115 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.142121 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.142127 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.142133 | orchestrator | 2025-04-01 19:50:10.142154 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-04-01 19:50:10.142161 | orchestrator | Tuesday 01 April 2025 19:43:25 +0000 (0:00:00.402) 0:07:13.451 ********* 2025-04-01 19:50:10.142167 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.142173 | orchestrator | 2025-04-01 19:50:10.142179 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-04-01 19:50:10.142185 | orchestrator | Tuesday 01 April 2025 19:43:26 +0000 (0:00:00.616) 0:07:14.068 ********* 2025-04-01 19:50:10.142191 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142197 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142203 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142209 | orchestrator | 2025-04-01 19:50:10.142215 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-04-01 19:50:10.142221 | orchestrator | Tuesday 01 April 2025 19:43:28 +0000 (0:00:01.686) 0:07:15.755 ********* 2025-04-01 19:50:10.142227 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142232 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142238 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142244 | orchestrator | 2025-04-01 19:50:10.142250 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-04-01 19:50:10.142256 | orchestrator | Tuesday 01 April 2025 19:43:29 +0000 (0:00:01.304) 0:07:17.059 ********* 2025-04-01 19:50:10.142262 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142268 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142274 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142280 | orchestrator | 2025-04-01 19:50:10.142286 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-04-01 19:50:10.142292 | orchestrator | Tuesday 01 April 2025 19:43:31 +0000 (0:00:01.845) 0:07:18.905 ********* 2025-04-01 19:50:10.142298 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142304 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142310 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142316 | orchestrator | 2025-04-01 19:50:10.142322 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-04-01 19:50:10.142328 | orchestrator | Tuesday 01 April 2025 19:43:33 +0000 (0:00:02.151) 0:07:21.057 ********* 2025-04-01 19:50:10.142334 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.142340 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.142346 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-04-01 19:50:10.142352 | orchestrator | 2025-04-01 19:50:10.142358 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-04-01 19:50:10.142364 | orchestrator | Tuesday 01 April 2025 19:43:34 +0000 (0:00:00.668) 0:07:21.726 ********* 2025-04-01 
19:50:10.142370 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-04-01 19:50:10.142376 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-04-01 19:50:10.142386 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.142393 | orchestrator | 2025-04-01 19:50:10.142399 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-04-01 19:50:10.142405 | orchestrator | Tuesday 01 April 2025 19:43:47 +0000 (0:00:13.243) 0:07:34.969 ********* 2025-04-01 19:50:10.142411 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.142417 | orchestrator | 2025-04-01 19:50:10.142423 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-04-01 19:50:10.142429 | orchestrator | Tuesday 01 April 2025 19:43:49 +0000 (0:00:01.709) 0:07:36.679 ********* 2025-04-01 19:50:10.142435 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.142441 | orchestrator | 2025-04-01 19:50:10.142447 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-04-01 19:50:10.142453 | orchestrator | Tuesday 01 April 2025 19:43:49 +0000 (0:00:00.466) 0:07:37.145 ********* 2025-04-01 19:50:10.142459 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.142465 | orchestrator | 2025-04-01 19:50:10.142471 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-04-01 19:50:10.142477 | orchestrator | Tuesday 01 April 2025 19:43:49 +0000 (0:00:00.333) 0:07:37.479 ********* 2025-04-01 19:50:10.142483 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-04-01 19:50:10.142489 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-04-01 19:50:10.142495 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-04-01 19:50:10.142501 | orchestrator | 2025-04-01 19:50:10.142507 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-04-01 19:50:10.142513 | orchestrator | Tuesday 01 April 2025 19:43:56 +0000 (0:00:06.336) 0:07:43.816 ********* 2025-04-01 19:50:10.142519 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-04-01 19:50:10.142525 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-04-01 19:50:10.142534 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-04-01 19:50:10.142540 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-04-01 19:50:10.142546 | orchestrator | 2025-04-01 19:50:10.142552 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.142558 | orchestrator | Tuesday 01 April 2025 19:44:01 +0000 (0:00:04.949) 0:07:48.765 ********* 2025-04-01 19:50:10.142564 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142570 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142576 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142582 | orchestrator | 2025-04-01 19:50:10.142588 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-01 19:50:10.142594 | orchestrator | Tuesday 01 
April 2025 19:44:02 +0000 (0:00:00.997) 0:07:49.763 ********* 2025-04-01 19:50:10.142614 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:10.142621 | orchestrator | 2025-04-01 19:50:10.142637 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-01 19:50:10.142643 | orchestrator | Tuesday 01 April 2025 19:44:03 +0000 (0:00:00.916) 0:07:50.680 ********* 2025-04-01 19:50:10.142649 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.142655 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.142661 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.142667 | orchestrator | 2025-04-01 19:50:10.142673 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-01 19:50:10.142679 | orchestrator | Tuesday 01 April 2025 19:44:03 +0000 (0:00:00.414) 0:07:51.095 ********* 2025-04-01 19:50:10.142685 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142691 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142697 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142707 | orchestrator | 2025-04-01 19:50:10.142713 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-01 19:50:10.142719 | orchestrator | Tuesday 01 April 2025 19:44:04 +0000 (0:00:01.310) 0:07:52.405 ********* 2025-04-01 19:50:10.142725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:50:10.142731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:50:10.142737 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:50:10.142743 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.142749 | orchestrator | 2025-04-01 19:50:10.142755 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-01 19:50:10.142761 | orchestrator | Tuesday 01 April 2025 19:44:06 +0000 (0:00:01.241) 0:07:53.647 ********* 2025-04-01 19:50:10.142767 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.142773 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.142779 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.142785 | orchestrator | 2025-04-01 19:50:10.142791 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.142797 | orchestrator | Tuesday 01 April 2025 19:44:06 +0000 (0:00:00.705) 0:07:54.352 ********* 2025-04-01 19:50:10.142803 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.142809 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.142815 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.142824 | orchestrator | 2025-04-01 19:50:10.142830 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-04-01 19:50:10.142836 | orchestrator | 2025-04-01 19:50:10.142842 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.142848 | orchestrator | Tuesday 01 April 2025 19:44:09 +0000 (0:00:02.293) 0:07:56.646 ********* 2025-04-01 19:50:10.142854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.142860 | orchestrator | 2025-04-01 19:50:10.142866 | orchestrator | TASK [ceph-handler : 
check for a mon container] ******************************** 2025-04-01 19:50:10.142872 | orchestrator | Tuesday 01 April 2025 19:44:09 +0000 (0:00:00.862) 0:07:57.508 ********* 2025-04-01 19:50:10.142878 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.142884 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.142890 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.142896 | orchestrator | 2025-04-01 19:50:10.142902 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.142908 | orchestrator | Tuesday 01 April 2025 19:44:10 +0000 (0:00:00.362) 0:07:57.870 ********* 2025-04-01 19:50:10.142914 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.142920 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.142926 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.142932 | orchestrator | 2025-04-01 19:50:10.142938 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.142944 | orchestrator | Tuesday 01 April 2025 19:44:11 +0000 (0:00:00.866) 0:07:58.737 ********* 2025-04-01 19:50:10.142950 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.142956 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.142962 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.142968 | orchestrator | 2025-04-01 19:50:10.142974 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.142980 | orchestrator | Tuesday 01 April 2025 19:44:12 +0000 (0:00:01.132) 0:07:59.870 ********* 2025-04-01 19:50:10.142986 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.142992 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.142997 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143003 | orchestrator | 2025-04-01 19:50:10.143010 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.143016 | orchestrator | Tuesday 01 April 2025 19:44:13 +0000 (0:00:00.967) 0:08:00.838 ********* 2025-04-01 19:50:10.143021 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143031 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143037 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143043 | orchestrator | 2025-04-01 19:50:10.143049 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.143055 | orchestrator | Tuesday 01 April 2025 19:44:13 +0000 (0:00:00.393) 0:08:01.231 ********* 2025-04-01 19:50:10.143061 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143067 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143073 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143079 | orchestrator | 2025-04-01 19:50:10.143085 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.143091 | orchestrator | Tuesday 01 April 2025 19:44:14 +0000 (0:00:00.531) 0:08:01.763 ********* 2025-04-01 19:50:10.143097 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143103 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143109 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143115 | orchestrator | 2025-04-01 19:50:10.143124 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.143130 | orchestrator | Tuesday 01 
April 2025 19:44:14 +0000 (0:00:00.833) 0:08:02.596 ********* 2025-04-01 19:50:10.143136 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143142 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143162 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143169 | orchestrator | 2025-04-01 19:50:10.143175 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.143181 | orchestrator | Tuesday 01 April 2025 19:44:15 +0000 (0:00:00.456) 0:08:03.053 ********* 2025-04-01 19:50:10.143187 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143193 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143199 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143205 | orchestrator | 2025-04-01 19:50:10.143211 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.143217 | orchestrator | Tuesday 01 April 2025 19:44:15 +0000 (0:00:00.438) 0:08:03.491 ********* 2025-04-01 19:50:10.143223 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143229 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143234 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143241 | orchestrator | 2025-04-01 19:50:10.143246 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.143252 | orchestrator | Tuesday 01 April 2025 19:44:16 +0000 (0:00:00.387) 0:08:03.879 ********* 2025-04-01 19:50:10.143258 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.143265 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.143271 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143276 | orchestrator | 2025-04-01 19:50:10.143283 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.143289 | orchestrator | Tuesday 01 April 2025 19:44:17 +0000 (0:00:01.158) 0:08:05.037 ********* 2025-04-01 19:50:10.143295 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143300 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143306 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143312 | orchestrator | 2025-04-01 19:50:10.143319 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.143325 | orchestrator | Tuesday 01 April 2025 19:44:17 +0000 (0:00:00.331) 0:08:05.369 ********* 2025-04-01 19:50:10.143330 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143336 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143342 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143348 | orchestrator | 2025-04-01 19:50:10.143354 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.143360 | orchestrator | Tuesday 01 April 2025 19:44:18 +0000 (0:00:00.342) 0:08:05.711 ********* 2025-04-01 19:50:10.143366 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.143372 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.143383 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143389 | orchestrator | 2025-04-01 19:50:10.143395 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.143401 | orchestrator | Tuesday 01 April 2025 19:44:18 +0000 (0:00:00.330) 0:08:06.042 ********* 2025-04-01 19:50:10.143407 | 
orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.143413 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.143419 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143425 | orchestrator | 2025-04-01 19:50:10.143431 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.143437 | orchestrator | Tuesday 01 April 2025 19:44:19 +0000 (0:00:00.645) 0:08:06.687 ********* 2025-04-01 19:50:10.143443 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.143449 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.143455 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143461 | orchestrator | 2025-04-01 19:50:10.143467 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.143473 | orchestrator | Tuesday 01 April 2025 19:44:19 +0000 (0:00:00.379) 0:08:07.066 ********* 2025-04-01 19:50:10.143479 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143485 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143491 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143500 | orchestrator | 2025-04-01 19:50:10.143506 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.143512 | orchestrator | Tuesday 01 April 2025 19:44:19 +0000 (0:00:00.377) 0:08:07.444 ********* 2025-04-01 19:50:10.143525 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143531 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143537 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143543 | orchestrator | 2025-04-01 19:50:10.143549 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.143555 | orchestrator | Tuesday 01 April 2025 19:44:20 +0000 (0:00:00.340) 0:08:07.784 ********* 2025-04-01 19:50:10.143561 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143567 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143573 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143578 | orchestrator | 2025-04-01 19:50:10.143585 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.143591 | orchestrator | Tuesday 01 April 2025 19:44:20 +0000 (0:00:00.632) 0:08:08.417 ********* 2025-04-01 19:50:10.143596 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.143602 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.143609 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.143615 | orchestrator | 2025-04-01 19:50:10.143620 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.143655 | orchestrator | Tuesday 01 April 2025 19:44:21 +0000 (0:00:00.373) 0:08:08.791 ********* 2025-04-01 19:50:10.143662 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143668 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143674 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143679 | orchestrator | 2025-04-01 19:50:10.143685 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.143691 | orchestrator | Tuesday 01 April 2025 19:44:21 +0000 (0:00:00.329) 0:08:09.120 ********* 2025-04-01 19:50:10.143697 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143703 | orchestrator | skipping: [testbed-node-4] 
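(Editor's note: the ceph-mgr sequence before the "Apply role ceph-osd" play above — generate the systemd unit and ceph-mgr.target, start the mgr, then disable the iostat/nfs/restful modules and enable dashboard and prometheus — corresponds to a handful of standard commands. A sketch of the manual equivalent follows; the unit and target names assume ceph-ansible's containerized layout and are not quoted from this log.)

    # Unit and target names are assumptions based on a containerized ceph-ansible deployment.
    sudo systemctl enable --now "ceph-mgr@$(hostname -s)" ceph-mgr.target
    # Module changes as recorded in the log: drop iostat/nfs/restful, add dashboard and prometheus.
    ceph mgr module disable iostat
    ceph mgr module disable nfs
    ceph mgr module disable restful
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus
    # Confirm the resulting module set.
    ceph mgr module ls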
2025-04-01 19:50:10.143709 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143715 | orchestrator | 2025-04-01 19:50:10.143721 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.143727 | orchestrator | Tuesday 01 April 2025 19:44:21 +0000 (0:00:00.392) 0:08:09.513 ********* 2025-04-01 19:50:10.143733 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143739 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143745 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143751 | orchestrator | 2025-04-01 19:50:10.143779 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.143786 | orchestrator | Tuesday 01 April 2025 19:44:22 +0000 (0:00:00.653) 0:08:10.167 ********* 2025-04-01 19:50:10.143792 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143798 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143804 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143810 | orchestrator | 2025-04-01 19:50:10.143816 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.143822 | orchestrator | Tuesday 01 April 2025 19:44:22 +0000 (0:00:00.410) 0:08:10.577 ********* 2025-04-01 19:50:10.143828 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143834 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143840 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143846 | orchestrator | 2025-04-01 19:50:10.143852 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.143858 | orchestrator | Tuesday 01 April 2025 19:44:23 +0000 (0:00:00.362) 0:08:10.939 ********* 2025-04-01 19:50:10.143864 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143869 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143875 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143881 | orchestrator | 2025-04-01 19:50:10.143887 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.143893 | orchestrator | Tuesday 01 April 2025 19:44:23 +0000 (0:00:00.350) 0:08:11.290 ********* 2025-04-01 19:50:10.143899 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143905 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143911 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143917 | orchestrator | 2025-04-01 19:50:10.143923 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.143929 | orchestrator | Tuesday 01 April 2025 19:44:24 +0000 (0:00:00.667) 0:08:11.957 ********* 2025-04-01 19:50:10.143935 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143941 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.143946 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143952 | orchestrator | 2025-04-01 19:50:10.143958 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.143964 | orchestrator | Tuesday 01 April 2025 19:44:24 +0000 (0:00:00.381) 0:08:12.339 ********* 2025-04-01 19:50:10.143970 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.143976 | orchestrator | skipping: [testbed-node-4] 2025-04-01 
19:50:10.143982 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.143988 | orchestrator | 2025-04-01 19:50:10.143994 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.144000 | orchestrator | Tuesday 01 April 2025 19:44:25 +0000 (0:00:00.331) 0:08:12.670 ********* 2025-04-01 19:50:10.144006 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144012 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144018 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144024 | orchestrator | 2025-04-01 19:50:10.144030 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.144035 | orchestrator | Tuesday 01 April 2025 19:44:25 +0000 (0:00:00.341) 0:08:13.012 ********* 2025-04-01 19:50:10.144041 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144047 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144053 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144059 | orchestrator | 2025-04-01 19:50:10.144065 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.144071 | orchestrator | Tuesday 01 April 2025 19:44:26 +0000 (0:00:00.688) 0:08:13.701 ********* 2025-04-01 19:50:10.144077 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144083 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144089 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144098 | orchestrator | 2025-04-01 19:50:10.144104 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.144110 | orchestrator | Tuesday 01 April 2025 19:44:26 +0000 (0:00:00.430) 0:08:14.131 ********* 2025-04-01 19:50:10.144116 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.144122 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.144128 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144134 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.144140 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.144146 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144152 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.144157 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.144163 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144168 | orchestrator | 2025-04-01 19:50:10.144173 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.144179 | orchestrator | Tuesday 01 April 2025 19:44:26 +0000 (0:00:00.375) 0:08:14.506 ********* 2025-04-01 19:50:10.144184 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-01 19:50:10.144193 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-01 19:50:10.144198 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144204 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-01 19:50:10.144209 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-01 19:50:10.144214 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144220 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-01 
19:50:10.144225 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-01 19:50:10.144231 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144239 | orchestrator | 2025-04-01 19:50:10.144244 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.144249 | orchestrator | Tuesday 01 April 2025 19:44:27 +0000 (0:00:00.434) 0:08:14.941 ********* 2025-04-01 19:50:10.144255 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144260 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144278 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144284 | orchestrator | 2025-04-01 19:50:10.144289 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.144295 | orchestrator | Tuesday 01 April 2025 19:44:28 +0000 (0:00:00.692) 0:08:15.633 ********* 2025-04-01 19:50:10.144300 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144305 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144311 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144316 | orchestrator | 2025-04-01 19:50:10.144322 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.144327 | orchestrator | Tuesday 01 April 2025 19:44:28 +0000 (0:00:00.393) 0:08:16.027 ********* 2025-04-01 19:50:10.144333 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144338 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144344 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144349 | orchestrator | 2025-04-01 19:50:10.144354 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.144360 | orchestrator | Tuesday 01 April 2025 19:44:28 +0000 (0:00:00.348) 0:08:16.376 ********* 2025-04-01 19:50:10.144365 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144370 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144376 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144381 | orchestrator | 2025-04-01 19:50:10.144387 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.144392 | orchestrator | Tuesday 01 April 2025 19:44:29 +0000 (0:00:00.380) 0:08:16.757 ********* 2025-04-01 19:50:10.144401 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144407 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144412 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144417 | orchestrator | 2025-04-01 19:50:10.144423 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.144428 | orchestrator | Tuesday 01 April 2025 19:44:29 +0000 (0:00:00.752) 0:08:17.510 ********* 2025-04-01 19:50:10.144434 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144439 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144444 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144450 | orchestrator | 2025-04-01 19:50:10.144455 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.144460 | orchestrator | Tuesday 01 April 2025 19:44:30 +0000 (0:00:00.421) 0:08:17.931 ********* 2025-04-01 19:50:10.144466 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.144471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.144477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.144482 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144488 | orchestrator | 2025-04-01 19:50:10.144493 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.144524 | orchestrator | Tuesday 01 April 2025 19:44:30 +0000 (0:00:00.515) 0:08:18.447 ********* 2025-04-01 19:50:10.144530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.144536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.144541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.144547 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144552 | orchestrator | 2025-04-01 19:50:10.144558 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.144563 | orchestrator | Tuesday 01 April 2025 19:44:31 +0000 (0:00:00.514) 0:08:18.962 ********* 2025-04-01 19:50:10.144568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.144574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.144579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.144584 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144590 | orchestrator | 2025-04-01 19:50:10.144595 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.144600 | orchestrator | Tuesday 01 April 2025 19:44:31 +0000 (0:00:00.487) 0:08:19.450 ********* 2025-04-01 19:50:10.144606 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144611 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144616 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144631 | orchestrator | 2025-04-01 19:50:10.144637 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.144642 | orchestrator | Tuesday 01 April 2025 19:44:32 +0000 (0:00:00.401) 0:08:19.852 ********* 2025-04-01 19:50:10.144648 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.144653 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144659 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.144664 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144669 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.144675 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144680 | orchestrator | 2025-04-01 19:50:10.144686 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.144691 | orchestrator | Tuesday 01 April 2025 19:44:33 +0000 (0:00:00.893) 0:08:20.745 ********* 2025-04-01 19:50:10.144696 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144702 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144707 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144712 | orchestrator | 2025-04-01 19:50:10.144718 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.144728 | 
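The rgw_instances facts that follow describe one radosgw instance per host. Judging by the items shown a little further down ('instance_name': 'rgw0', the node's own address, 'radosgw_frontend_port': 8081), the list is built from an instance counter plus a base port; the sketch below assumes radosgw_num_instances defaults to 1 and takes the base port 8081 from the log:

- name: set_fact rgw_instances without rgw multisite (sketch)
  ansible.builtin.set_fact:
    rgw_instances: "{{ rgw_instances | default([]) +
                       [{'instance_name': 'rgw' ~ item,
                         'radosgw_address': _radosgw_address,
                         'radosgw_frontend_port': 8081 + item | int}] }}"
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"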
orchestrator | Tuesday 01 April 2025 19:44:33 +0000 (0:00:00.361) 0:08:21.107 ********* 2025-04-01 19:50:10.144734 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144739 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144745 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144750 | orchestrator | 2025-04-01 19:50:10.144755 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.144761 | orchestrator | Tuesday 01 April 2025 19:44:33 +0000 (0:00:00.342) 0:08:21.450 ********* 2025-04-01 19:50:10.144780 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.144787 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144792 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.144797 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144803 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.144808 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144814 | orchestrator | 2025-04-01 19:50:10.144819 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.144825 | orchestrator | Tuesday 01 April 2025 19:44:34 +0000 (0:00:00.508) 0:08:21.958 ********* 2025-04-01 19:50:10.144830 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.144836 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144841 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.144847 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144852 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.144858 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144863 | orchestrator | 2025-04-01 19:50:10.144868 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.144874 | orchestrator | Tuesday 01 April 2025 19:44:35 +0000 (0:00:00.686) 0:08:22.645 ********* 2025-04-01 19:50:10.144879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.144885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.144890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.144896 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.144907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.144912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.144917 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.144928 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.144934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.144939 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144944 | orchestrator | 2025-04-01 19:50:10.144950 | orchestrator | TASK [ceph-config 
: generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.144955 | orchestrator | Tuesday 01 April 2025 19:44:35 +0000 (0:00:00.750) 0:08:23.395 ********* 2025-04-01 19:50:10.144961 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.144966 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.144972 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.144977 | orchestrator | 2025-04-01 19:50:10.144982 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.144988 | orchestrator | Tuesday 01 April 2025 19:44:36 +0000 (0:00:00.879) 0:08:24.274 ********* 2025-04-01 19:50:10.144993 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.145003 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145009 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.145014 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145019 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.145025 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145030 | orchestrator | 2025-04-01 19:50:10.145036 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.145041 | orchestrator | Tuesday 01 April 2025 19:44:37 +0000 (0:00:00.647) 0:08:24.921 ********* 2025-04-01 19:50:10.145046 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145052 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145057 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145063 | orchestrator | 2025-04-01 19:50:10.145068 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.145074 | orchestrator | Tuesday 01 April 2025 19:44:38 +0000 (0:00:00.853) 0:08:25.775 ********* 2025-04-01 19:50:10.145079 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145088 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145094 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145099 | orchestrator | 2025-04-01 19:50:10.145104 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-01 19:50:10.145110 | orchestrator | Tuesday 01 April 2025 19:44:38 +0000 (0:00:00.604) 0:08:26.379 ********* 2025-04-01 19:50:10.145115 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.145121 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.145126 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.145131 | orchestrator | 2025-04-01 19:50:10.145137 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-01 19:50:10.145142 | orchestrator | Tuesday 01 April 2025 19:44:39 +0000 (0:00:00.339) 0:08:26.718 ********* 2025-04-01 19:50:10.145148 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-01 19:50:10.145153 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:50:10.145165 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:50:10.145171 | orchestrator | 2025-04-01 19:50:10.145176 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-01 19:50:10.145182 | orchestrator | Tuesday 01 April 2025 19:44:40 +0000 (0:00:01.286) 
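'set_fact container_exec_cmd' prepares the prefix the OSD play uses to run ceph admin commands inside the mon container on the first monitor; later tasks such as 'set noup flag' and 'wait for all osd to be up' are delegated to that host with this prefix. A minimal sketch, assuming podman as the container runtime (the role derives this from its container_binary fact) and the default 'mons' group name:

- name: set_fact container_exec_cmd (sketch)
  ansible.builtin.set_fact:
    container_exec_cmd: "podman exec ceph-mon-{{ hostvars[groups['mons'][0]]['ansible_facts']['hostname'] }}"

A delegated task can then run, for example, "{{ container_exec_cmd }} ceph --cluster ceph osd stat".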
0:08:28.005 ********* 2025-04-01 19:50:10.145187 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.145193 | orchestrator | 2025-04-01 19:50:10.145210 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-01 19:50:10.145217 | orchestrator | Tuesday 01 April 2025 19:44:41 +0000 (0:00:00.655) 0:08:28.661 ********* 2025-04-01 19:50:10.145222 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145227 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145233 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145238 | orchestrator | 2025-04-01 19:50:10.145244 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-01 19:50:10.145249 | orchestrator | Tuesday 01 April 2025 19:44:41 +0000 (0:00:00.342) 0:08:29.004 ********* 2025-04-01 19:50:10.145254 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145260 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145266 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145271 | orchestrator | 2025-04-01 19:50:10.145276 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-01 19:50:10.145282 | orchestrator | Tuesday 01 April 2025 19:44:42 +0000 (0:00:00.628) 0:08:29.632 ********* 2025-04-01 19:50:10.145287 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145293 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145298 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145308 | orchestrator | 2025-04-01 19:50:10.145313 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-01 19:50:10.145319 | orchestrator | Tuesday 01 April 2025 19:44:42 +0000 (0:00:00.431) 0:08:30.063 ********* 2025-04-01 19:50:10.145324 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145329 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145335 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145340 | orchestrator | 2025-04-01 19:50:10.145346 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-01 19:50:10.145351 | orchestrator | Tuesday 01 April 2025 19:44:42 +0000 (0:00:00.545) 0:08:30.608 ********* 2025-04-01 19:50:10.145357 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.145362 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.145368 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.145373 | orchestrator | 2025-04-01 19:50:10.145379 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-01 19:50:10.145384 | orchestrator | Tuesday 01 April 2025 19:44:44 +0000 (0:00:01.004) 0:08:31.613 ********* 2025-04-01 19:50:10.145389 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.145395 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.145400 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.145405 | orchestrator | 2025-04-01 19:50:10.145411 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-04-01 19:50:10.145416 | orchestrator | Tuesday 01 April 2025 19:44:44 +0000 (0:00:00.768) 0:08:32.381 ********* 2025-04-01 19:50:10.145422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': 
True}) 2025-04-01 19:50:10.145432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-01 19:50:10.145438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-01 19:50:10.145443 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-01 19:50:10.145449 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-01 19:50:10.145454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-01 19:50:10.145459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-01 19:50:10.145465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-01 19:50:10.145470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-01 19:50:10.145476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-01 19:50:10.145481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-01 19:50:10.145486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-01 19:50:10.145492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-01 19:50:10.145497 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-01 19:50:10.145503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-01 19:50:10.145508 | orchestrator | 2025-04-01 19:50:10.145513 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-04-01 19:50:10.145519 | orchestrator | Tuesday 01 April 2025 19:44:49 +0000 (0:00:04.551) 0:08:36.932 ********* 2025-04-01 19:50:10.145524 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145530 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145535 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145540 | orchestrator | 2025-04-01 19:50:10.145546 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-04-01 19:50:10.145551 | orchestrator | Tuesday 01 April 2025 19:44:49 +0000 (0:00:00.379) 0:08:37.312 ********* 2025-04-01 19:50:10.145560 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.145566 | orchestrator | 2025-04-01 19:50:10.145571 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-04-01 19:50:10.145576 | orchestrator | Tuesday 01 April 2025 19:44:50 +0000 (0:00:00.832) 0:08:38.145 ********* 2025-04-01 19:50:10.145582 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-01 19:50:10.145587 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-01 19:50:10.145593 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-01 19:50:10.145613 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-04-01 19:50:10.145619 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-04-01 19:50:10.145648 | orchestrator | ok: 
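The 'apply operating system tuning' task above writes the kernel parameters reported as changed (fs.aio-max-nr=1048576, fs.file-max=26234859, vm.zone_reclaim_mode=0, vm.swappiness=10, and vm.min_free_kbytes=67584, the last derived from the node's default). A minimal equivalent using ansible.posix.sysctl; the sysctl_file path is an assumption:

- name: apply operating system tuning (sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_file: /etc/sysctl.d/ceph-tuning.conf  # path is an assumption
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }
    - { name: fs.file-max, value: "26234859" }
    - { name: vm.zone_reclaim_mode, value: "0" }
    - { name: vm.swappiness, value: "10" }
    - { name: vm.min_free_kbytes, value: "67584" }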
[testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-04-01 19:50:10.145654 | orchestrator | 2025-04-01 19:50:10.145659 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-04-01 19:50:10.145665 | orchestrator | Tuesday 01 April 2025 19:44:51 +0000 (0:00:01.156) 0:08:39.301 ********* 2025-04-01 19:50:10.145670 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:50:10.145676 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.145681 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-01 19:50:10.145686 | orchestrator | 2025-04-01 19:50:10.145692 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-04-01 19:50:10.145697 | orchestrator | Tuesday 01 April 2025 19:44:53 +0000 (0:00:01.746) 0:08:41.048 ********* 2025-04-01 19:50:10.145703 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 19:50:10.145708 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.145714 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.145722 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 19:50:10.145727 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.145733 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.145738 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 19:50:10.145744 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.145749 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.145755 | orchestrator | 2025-04-01 19:50:10.145760 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-04-01 19:50:10.145765 | orchestrator | Tuesday 01 April 2025 19:44:54 +0000 (0:00:01.520) 0:08:42.568 ********* 2025-04-01 19:50:10.145771 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.145776 | orchestrator | 2025-04-01 19:50:10.145782 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-04-01 19:50:10.145787 | orchestrator | Tuesday 01 April 2025 19:44:57 +0000 (0:00:02.430) 0:08:44.998 ********* 2025-04-01 19:50:10.145793 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.145798 | orchestrator | 2025-04-01 19:50:10.145804 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-04-01 19:50:10.145809 | orchestrator | Tuesday 01 April 2025 19:44:57 +0000 (0:00:00.559) 0:08:45.558 ********* 2025-04-01 19:50:10.145814 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145820 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145825 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145831 | orchestrator | 2025-04-01 19:50:10.145836 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-04-01 19:50:10.145842 | orchestrator | Tuesday 01 April 2025 19:44:58 +0000 (0:00:00.620) 0:08:46.179 ********* 2025-04-01 19:50:10.145847 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145852 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145862 | orchestrator | skipping: [testbed-node-5] 2025-04-01 
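Before the new OSDs are activated, 'set noup flag' runs once against the first monitor so that freshly created OSDs are not marked up until all of them have been prepared; 'unset noup flag' clears it again after the daemons have been started. A minimal sketch of the two delegated commands, reusing the container_exec_cmd prefix sketched earlier and the default 'mons' group name:

- name: set noup flag (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph osd set noup"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: unset noup flag (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph osd unset noup"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true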
19:50:10.145867 | orchestrator | 2025-04-01 19:50:10.145873 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-04-01 19:50:10.145878 | orchestrator | Tuesday 01 April 2025 19:44:58 +0000 (0:00:00.349) 0:08:46.528 ********* 2025-04-01 19:50:10.145884 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.145889 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.145895 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.145900 | orchestrator | 2025-04-01 19:50:10.145905 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-04-01 19:50:10.145911 | orchestrator | Tuesday 01 April 2025 19:44:59 +0000 (0:00:00.390) 0:08:46.919 ********* 2025-04-01 19:50:10.145916 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.145922 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.145927 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.145933 | orchestrator | 2025-04-01 19:50:10.145938 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-04-01 19:50:10.145944 | orchestrator | Tuesday 01 April 2025 19:44:59 +0000 (0:00:00.378) 0:08:47.298 ********* 2025-04-01 19:50:10.145949 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.145955 | orchestrator | 2025-04-01 19:50:10.145960 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-04-01 19:50:10.145966 | orchestrator | Tuesday 01 April 2025 19:45:00 +0000 (0:00:01.007) 0:08:48.305 ********* 2025-04-01 19:50:10.145971 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-959a80fb-1de6-50df-b35c-a247ba0dd9c7', 'data_vg': 'ceph-959a80fb-1de6-50df-b35c-a247ba0dd9c7'}) 2025-04-01 19:50:10.145977 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-52229b2b-1fb5-50ba-ad18-deadbd92af76', 'data_vg': 'ceph-52229b2b-1fb5-50ba-ad18-deadbd92af76'}) 2025-04-01 19:50:10.145983 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-bdd573d7-384a-5f49-8a42-9b210b6d8834', 'data_vg': 'ceph-bdd573d7-384a-5f49-8a42-9b210b6d8834'}) 2025-04-01 19:50:10.145988 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050', 'data_vg': 'ceph-cc43dffc-fbc4-5f6e-b48c-5e4474ee7050'}) 2025-04-01 19:50:10.145994 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b9675d24-a7d4-5c32-a36a-48aa524d4563', 'data_vg': 'ceph-b9675d24-a7d4-5c32-a36a-48aa524d4563'}) 2025-04-01 19:50:10.146037 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988d16a2-b35c-5840-9d7c-a8265d6d87f9', 'data_vg': 'ceph-988d16a2-b35c-5840-9d7c-a8265d6d87f9'}) 2025-04-01 19:50:10.146045 | orchestrator | 2025-04-01 19:50:10.146051 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-04-01 19:50:10.146056 | orchestrator | Tuesday 01 April 2025 19:45:30 +0000 (0:00:30.025) 0:09:18.331 ********* 2025-04-01 19:50:10.146062 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146067 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146073 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146078 | orchestrator | 2025-04-01 19:50:10.146083 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] 
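'use ceph-volume to create bluestore osds' loops over lvm_volumes; each item names a pre-created logical volume (data) and its volume group (data_vg), as the changed items above show, and the container_env_args fact selected just before it enables bluestore with dmcrypt. The role drives this through its own ceph_volume module inside a container; a plain-command approximation of the same call would be:

- name: use ceph-volume to create bluestore osds (sketch)
  ansible.builtin.command: >
    ceph-volume --cluster ceph lvm create --bluestore --dmcrypt
    --data {{ item.data_vg }}/{{ item.data }}
  loop: "{{ lvm_volumes }}"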
********************************* 2025-04-01 19:50:10.146089 | orchestrator | Tuesday 01 April 2025 19:45:31 +0000 (0:00:00.525) 0:09:18.857 ********* 2025-04-01 19:50:10.146094 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.146100 | orchestrator | 2025-04-01 19:50:10.146105 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-04-01 19:50:10.146111 | orchestrator | Tuesday 01 April 2025 19:45:31 +0000 (0:00:00.594) 0:09:19.451 ********* 2025-04-01 19:50:10.146116 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.146121 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.146127 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.146136 | orchestrator | 2025-04-01 19:50:10.146142 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-04-01 19:50:10.146147 | orchestrator | Tuesday 01 April 2025 19:45:32 +0000 (0:00:00.649) 0:09:20.101 ********* 2025-04-01 19:50:10.146152 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.146158 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.146163 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.146172 | orchestrator | 2025-04-01 19:50:10.146177 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-04-01 19:50:10.146183 | orchestrator | Tuesday 01 April 2025 19:45:34 +0000 (0:00:01.760) 0:09:21.861 ********* 2025-04-01 19:50:10.146188 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.146194 | orchestrator | 2025-04-01 19:50:10.146199 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-04-01 19:50:10.146204 | orchestrator | Tuesday 01 April 2025 19:45:34 +0000 (0:00:00.604) 0:09:22.466 ********* 2025-04-01 19:50:10.146210 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.146215 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.146220 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.146225 | orchestrator | 2025-04-01 19:50:10.146230 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-04-01 19:50:10.146235 | orchestrator | Tuesday 01 April 2025 19:45:36 +0000 (0:00:01.691) 0:09:24.158 ********* 2025-04-01 19:50:10.146240 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.146245 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.146250 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.146255 | orchestrator | 2025-04-01 19:50:10.146260 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-04-01 19:50:10.146267 | orchestrator | Tuesday 01 April 2025 19:45:37 +0000 (0:00:01.328) 0:09:25.486 ********* 2025-04-01 19:50:10.146272 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.146278 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.146283 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.146287 | orchestrator | 2025-04-01 19:50:10.146292 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-04-01 19:50:10.146297 | orchestrator | Tuesday 01 April 2025 19:45:39 +0000 (0:00:01.759) 0:09:27.246 ********* 2025-04-01 19:50:10.146302 | orchestrator | skipping: 
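start_osds.yml collects the OSD ids created above (0-5 spread across the three nodes), templates a ceph-osd@.service unit that runs the containerised daemon, and enables ceph-osd.target so the units come up on boot. A minimal sketch of the enable/start part, assuming the unit files are already in place and a hypothetical osd_ids list:

- name: enable ceph-osd.target (sketch)
  ansible.builtin.systemd:
    name: ceph-osd.target
    enabled: true
    daemon_reload: true

- name: systemd start osd (sketch)
  ansible.builtin.systemd:
    name: "ceph-osd@{{ item }}"
    state: started
    enabled: true
  loop: "{{ osd_ids }}"  # e.g. [0, 5] on testbed-node-3, per the log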
[testbed-node-3] 2025-04-01 19:50:10.146307 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146312 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146317 | orchestrator | 2025-04-01 19:50:10.146322 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-04-01 19:50:10.146327 | orchestrator | Tuesday 01 April 2025 19:45:40 +0000 (0:00:00.388) 0:09:27.634 ********* 2025-04-01 19:50:10.146332 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146337 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146342 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146346 | orchestrator | 2025-04-01 19:50:10.146351 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-04-01 19:50:10.146356 | orchestrator | Tuesday 01 April 2025 19:45:40 +0000 (0:00:00.640) 0:09:28.275 ********* 2025-04-01 19:50:10.146361 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-01 19:50:10.146366 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-04-01 19:50:10.146371 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-04-01 19:50:10.146376 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-04-01 19:50:10.146381 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-04-01 19:50:10.146386 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-04-01 19:50:10.146390 | orchestrator | 2025-04-01 19:50:10.146395 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-04-01 19:50:10.146400 | orchestrator | Tuesday 01 April 2025 19:45:41 +0000 (0:00:01.065) 0:09:29.340 ********* 2025-04-01 19:50:10.146405 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-04-01 19:50:10.146413 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-04-01 19:50:10.146418 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-04-01 19:50:10.146423 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-04-01 19:50:10.146428 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-04-01 19:50:10.146433 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-04-01 19:50:10.146438 | orchestrator | 2025-04-01 19:50:10.146443 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-04-01 19:50:10.146448 | orchestrator | Tuesday 01 April 2025 19:45:45 +0000 (0:00:03.378) 0:09:32.719 ********* 2025-04-01 19:50:10.146453 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146458 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146463 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.146468 | orchestrator | 2025-04-01 19:50:10.146485 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-04-01 19:50:10.146491 | orchestrator | Tuesday 01 April 2025 19:45:47 +0000 (0:00:02.396) 0:09:35.116 ********* 2025-04-01 19:50:10.146496 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146501 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146506 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 
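'wait for all osd to be up' needed one retry before succeeding: it polls the cluster from the first monitor until the number of OSDs reported up matches the total. A minimal sketch of such a check, reusing the container_exec_cmd prefix from above; the exact JSON layout of 'ceph osd stat' differs between Ceph releases:

- name: wait for all osd to be up (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph --cluster ceph osd stat -f json"
  register: osd_stat
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  retries: 60
  delay: 10
  until:
    - (osd_stat.stdout | from_json).num_osds | int > 0
    - (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
  changed_when: false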
2025-04-01 19:50:10.146511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.146516 | orchestrator | 2025-04-01 19:50:10.146521 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-04-01 19:50:10.146526 | orchestrator | Tuesday 01 April 2025 19:46:00 +0000 (0:00:12.629) 0:09:47.745 ********* 2025-04-01 19:50:10.146530 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146535 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146540 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146545 | orchestrator | 2025-04-01 19:50:10.146550 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-04-01 19:50:10.146555 | orchestrator | Tuesday 01 April 2025 19:46:00 +0000 (0:00:00.735) 0:09:48.481 ********* 2025-04-01 19:50:10.146560 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146565 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146570 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146575 | orchestrator | 2025-04-01 19:50:10.146580 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.146585 | orchestrator | Tuesday 01 April 2025 19:46:02 +0000 (0:00:01.315) 0:09:49.796 ********* 2025-04-01 19:50:10.146590 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.146594 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.146599 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.146604 | orchestrator | 2025-04-01 19:50:10.146609 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-01 19:50:10.146614 | orchestrator | Tuesday 01 April 2025 19:46:02 +0000 (0:00:00.743) 0:09:50.540 ********* 2025-04-01 19:50:10.146619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.146636 | orchestrator | 2025-04-01 19:50:10.146641 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-04-01 19:50:10.146646 | orchestrator | Tuesday 01 April 2025 19:46:03 +0000 (0:00:00.889) 0:09:51.429 ********* 2025-04-01 19:50:10.146651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.146656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.146661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.146666 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146671 | orchestrator | 2025-04-01 19:50:10.146676 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-04-01 19:50:10.146681 | orchestrator | Tuesday 01 April 2025 19:46:04 +0000 (0:00:00.429) 0:09:51.859 ********* 2025-04-01 19:50:10.146689 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146694 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146699 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146704 | orchestrator | 2025-04-01 19:50:10.146709 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-04-01 19:50:10.146714 | orchestrator | Tuesday 01 April 2025 19:46:04 +0000 (0:00:00.351) 0:09:52.210 ********* 2025-04-01 19:50:10.146719 | orchestrator | skipping: [testbed-node-3] 2025-04-01 
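The osds handler block that follows is almost entirely skipped because no OSD restart is pending. When it does fire, the pattern is: freeze the cluster (disable the balancer and pg autoscaling), restart the OSD daemons node by node via a copied helper script, then re-enable both. A rough sketch of that bracket with plain ceph commands and the systemd module; the real handler uses a restart script and stricter serialisation, so this is illustrative only:

- name: disable balancer (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph balancer off"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: restart ceph osds daemon(s) (sketch)
  ansible.builtin.systemd:
    name: "ceph-osd@{{ item }}"
    state: restarted
  loop: "{{ osd_ids }}"

- name: re-enable balancer (sketch)
  ansible.builtin.command: "{{ container_exec_cmd }} ceph balancer on"
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true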
19:50:10.146724 | orchestrator | 2025-04-01 19:50:10.146729 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-04-01 19:50:10.146734 | orchestrator | Tuesday 01 April 2025 19:46:04 +0000 (0:00:00.264) 0:09:52.475 ********* 2025-04-01 19:50:10.146738 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146743 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146748 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146753 | orchestrator | 2025-04-01 19:50:10.146758 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-04-01 19:50:10.146763 | orchestrator | Tuesday 01 April 2025 19:46:05 +0000 (0:00:00.669) 0:09:53.145 ********* 2025-04-01 19:50:10.146768 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146773 | orchestrator | 2025-04-01 19:50:10.146778 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-04-01 19:50:10.146786 | orchestrator | Tuesday 01 April 2025 19:46:05 +0000 (0:00:00.285) 0:09:53.430 ********* 2025-04-01 19:50:10.146791 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146796 | orchestrator | 2025-04-01 19:50:10.146801 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-01 19:50:10.146806 | orchestrator | Tuesday 01 April 2025 19:46:06 +0000 (0:00:00.264) 0:09:53.695 ********* 2025-04-01 19:50:10.146811 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146816 | orchestrator | 2025-04-01 19:50:10.146821 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-04-01 19:50:10.146825 | orchestrator | Tuesday 01 April 2025 19:46:06 +0000 (0:00:00.140) 0:09:53.836 ********* 2025-04-01 19:50:10.146830 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146835 | orchestrator | 2025-04-01 19:50:10.146840 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-04-01 19:50:10.146845 | orchestrator | Tuesday 01 April 2025 19:46:06 +0000 (0:00:00.261) 0:09:54.097 ********* 2025-04-01 19:50:10.146850 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146855 | orchestrator | 2025-04-01 19:50:10.146860 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-04-01 19:50:10.146865 | orchestrator | Tuesday 01 April 2025 19:46:06 +0000 (0:00:00.260) 0:09:54.358 ********* 2025-04-01 19:50:10.146870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.146875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.146880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.146885 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146890 | orchestrator | 2025-04-01 19:50:10.146910 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-04-01 19:50:10.146916 | orchestrator | Tuesday 01 April 2025 19:46:07 +0000 (0:00:00.453) 0:09:54.811 ********* 2025-04-01 19:50:10.146921 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146927 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.146932 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.146940 | orchestrator | 2025-04-01 19:50:10.146945 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg 
autoscale on pools] *************** 2025-04-01 19:50:10.146950 | orchestrator | Tuesday 01 April 2025 19:46:07 +0000 (0:00:00.348) 0:09:55.160 ********* 2025-04-01 19:50:10.146955 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146960 | orchestrator | 2025-04-01 19:50:10.146965 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-04-01 19:50:10.146970 | orchestrator | Tuesday 01 April 2025 19:46:08 +0000 (0:00:00.896) 0:09:56.056 ********* 2025-04-01 19:50:10.146979 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.146984 | orchestrator | 2025-04-01 19:50:10.146989 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.146993 | orchestrator | Tuesday 01 April 2025 19:46:08 +0000 (0:00:00.256) 0:09:56.313 ********* 2025-04-01 19:50:10.146998 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.147003 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.147008 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.147013 | orchestrator | 2025-04-01 19:50:10.147018 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-04-01 19:50:10.147023 | orchestrator | 2025-04-01 19:50:10.147028 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.147033 | orchestrator | Tuesday 01 April 2025 19:46:11 +0000 (0:00:03.051) 0:09:59.364 ********* 2025-04-01 19:50:10.147038 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.147043 | orchestrator | 2025-04-01 19:50:10.147048 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-01 19:50:10.147053 | orchestrator | Tuesday 01 April 2025 19:46:13 +0000 (0:00:01.501) 0:10:00.866 ********* 2025-04-01 19:50:10.147058 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147063 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147068 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147073 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147078 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147083 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.147088 | orchestrator | 2025-04-01 19:50:10.147093 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.147098 | orchestrator | Tuesday 01 April 2025 19:46:14 +0000 (0:00:00.852) 0:10:01.718 ********* 2025-04-01 19:50:10.147103 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147108 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147113 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147118 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147123 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147128 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147133 | orchestrator | 2025-04-01 19:50:10.147138 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.147142 | orchestrator | Tuesday 01 April 2025 19:46:15 +0000 (0:00:01.389) 0:10:03.108 ********* 2025-04-01 19:50:10.147147 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147153 | orchestrator | skipping: 
[testbed-node-1] 2025-04-01 19:50:10.147157 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147162 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147167 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147172 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147177 | orchestrator | 2025-04-01 19:50:10.147182 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.147187 | orchestrator | Tuesday 01 April 2025 19:46:16 +0000 (0:00:01.160) 0:10:04.268 ********* 2025-04-01 19:50:10.147192 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147197 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147202 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147207 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147212 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147216 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147221 | orchestrator | 2025-04-01 19:50:10.147226 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.147231 | orchestrator | Tuesday 01 April 2025 19:46:18 +0000 (0:00:01.508) 0:10:05.776 ********* 2025-04-01 19:50:10.147236 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147241 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147249 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147254 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147259 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147264 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.147269 | orchestrator | 2025-04-01 19:50:10.147274 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.147282 | orchestrator | Tuesday 01 April 2025 19:46:19 +0000 (0:00:01.037) 0:10:06.814 ********* 2025-04-01 19:50:10.147287 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147292 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147297 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147302 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147307 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147312 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147317 | orchestrator | 2025-04-01 19:50:10.147322 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.147327 | orchestrator | Tuesday 01 April 2025 19:46:19 +0000 (0:00:00.751) 0:10:07.566 ********* 2025-04-01 19:50:10.147332 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147337 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147342 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147347 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147352 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147357 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147362 | orchestrator | 2025-04-01 19:50:10.147367 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.147384 | orchestrator | Tuesday 01 April 2025 19:46:20 +0000 (0:00:00.954) 0:10:08.521 ********* 2025-04-01 19:50:10.147389 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147394 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147399 | 
orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147404 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147414 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147420 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147425 | orchestrator | 2025-04-01 19:50:10.147430 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.147435 | orchestrator | Tuesday 01 April 2025 19:46:21 +0000 (0:00:00.724) 0:10:09.245 ********* 2025-04-01 19:50:10.147440 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147444 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147449 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147454 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147459 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147464 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147469 | orchestrator | 2025-04-01 19:50:10.147474 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.147479 | orchestrator | Tuesday 01 April 2025 19:46:22 +0000 (0:00:00.743) 0:10:09.989 ********* 2025-04-01 19:50:10.147484 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147489 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147494 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147499 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147504 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147509 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147514 | orchestrator | 2025-04-01 19:50:10.147519 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.147524 | orchestrator | Tuesday 01 April 2025 19:46:23 +0000 (0:00:01.073) 0:10:11.063 ********* 2025-04-01 19:50:10.147529 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147534 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147538 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.147543 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147548 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147553 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147561 | orchestrator | 2025-04-01 19:50:10.147566 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.147571 | orchestrator | Tuesday 01 April 2025 19:46:25 +0000 (0:00:01.654) 0:10:12.717 ********* 2025-04-01 19:50:10.147576 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147581 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147586 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147591 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147596 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147601 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147606 | orchestrator | 2025-04-01 19:50:10.147611 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.147616 | orchestrator | Tuesday 01 April 2025 19:46:25 +0000 (0:00:00.730) 0:10:13.448 ********* 2025-04-01 19:50:10.147630 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147635 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147640 | orchestrator | ok: [testbed-node-2] 
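The ceph-crash play first probes every node for running ceph containers (mon, osd, mds, rgw, mgr, crash, and so on) and then condenses the results into handler_*_status facts scoped to the matching group, which is why handler_mon_status is only set on testbed-node-0/1/2 and handler_osd_status only on testbed-node-3/4/5. A minimal sketch of one check/fact pair, assuming podman and the hypothetical register name mon_container_check:

- name: check for a mon container (sketch)
  ansible.builtin.command: podman ps -q --filter name=ceph-mon
  register: mon_container_check
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups['mons']

- name: set_fact handler_mon_status (sketch)
  ansible.builtin.set_fact:
    handler_mon_status: "{{ mon_container_check.stdout | default('') | length > 0 }}"
  when: inventory_hostname in groups['mons']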
2025-04-01 19:50:10.147645 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147650 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147655 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147660 | orchestrator | 2025-04-01 19:50:10.147665 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.147670 | orchestrator | Tuesday 01 April 2025 19:46:26 +0000 (0:00:00.950) 0:10:14.398 ********* 2025-04-01 19:50:10.147675 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147680 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147685 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147690 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147695 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147700 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147705 | orchestrator | 2025-04-01 19:50:10.147710 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.147715 | orchestrator | Tuesday 01 April 2025 19:46:27 +0000 (0:00:00.808) 0:10:15.206 ********* 2025-04-01 19:50:10.147720 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147725 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147729 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147734 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147739 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147744 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147749 | orchestrator | 2025-04-01 19:50:10.147754 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.147759 | orchestrator | Tuesday 01 April 2025 19:46:28 +0000 (0:00:01.072) 0:10:16.279 ********* 2025-04-01 19:50:10.147764 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147769 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147774 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147779 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147784 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147789 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147794 | orchestrator | 2025-04-01 19:50:10.147799 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.147804 | orchestrator | Tuesday 01 April 2025 19:46:29 +0000 (0:00:00.739) 0:10:17.018 ********* 2025-04-01 19:50:10.147809 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147814 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147822 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147827 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147832 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147838 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147842 | orchestrator | 2025-04-01 19:50:10.147847 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.147852 | orchestrator | Tuesday 01 April 2025 19:46:30 +0000 (0:00:00.977) 0:10:17.996 ********* 2025-04-01 19:50:10.147862 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.147867 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.147872 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.147877 | orchestrator | 
skipping: [testbed-node-3] 2025-04-01 19:50:10.147882 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147899 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147905 | orchestrator | 2025-04-01 19:50:10.147910 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.147915 | orchestrator | Tuesday 01 April 2025 19:46:31 +0000 (0:00:00.733) 0:10:18.729 ********* 2025-04-01 19:50:10.147920 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147925 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147929 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.147934 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.147939 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.147947 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.147952 | orchestrator | 2025-04-01 19:50:10.147957 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.147962 | orchestrator | Tuesday 01 April 2025 19:46:32 +0000 (0:00:01.066) 0:10:19.796 ********* 2025-04-01 19:50:10.147967 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.147972 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.147977 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.147982 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.147986 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.147991 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.147996 | orchestrator | 2025-04-01 19:50:10.148001 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.148006 | orchestrator | Tuesday 01 April 2025 19:46:32 +0000 (0:00:00.751) 0:10:20.547 ********* 2025-04-01 19:50:10.148011 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148016 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148021 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148026 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148031 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148036 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148041 | orchestrator | 2025-04-01 19:50:10.148045 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.148053 | orchestrator | Tuesday 01 April 2025 19:46:33 +0000 (0:00:00.983) 0:10:21.530 ********* 2025-04-01 19:50:10.148058 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148063 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148068 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148073 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148078 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148083 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148087 | orchestrator | 2025-04-01 19:50:10.148092 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.148097 | orchestrator | Tuesday 01 April 2025 19:46:34 +0000 (0:00:00.759) 0:10:22.290 ********* 2025-04-01 19:50:10.148102 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148107 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148112 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148117 | orchestrator | skipping: [testbed-node-3] 2025-04-01 
19:50:10.148122 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148127 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148132 | orchestrator | 2025-04-01 19:50:10.148137 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.148142 | orchestrator | Tuesday 01 April 2025 19:46:35 +0000 (0:00:00.986) 0:10:23.276 ********* 2025-04-01 19:50:10.148146 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148151 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148156 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148165 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148170 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148175 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148180 | orchestrator | 2025-04-01 19:50:10.148184 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.148189 | orchestrator | Tuesday 01 April 2025 19:46:36 +0000 (0:00:00.758) 0:10:24.035 ********* 2025-04-01 19:50:10.148194 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148199 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148204 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148209 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148217 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148222 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148227 | orchestrator | 2025-04-01 19:50:10.148232 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.148237 | orchestrator | Tuesday 01 April 2025 19:46:37 +0000 (0:00:00.958) 0:10:24.993 ********* 2025-04-01 19:50:10.148242 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148247 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148252 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148256 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148261 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148266 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148271 | orchestrator | 2025-04-01 19:50:10.148276 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.148281 | orchestrator | Tuesday 01 April 2025 19:46:38 +0000 (0:00:00.702) 0:10:25.696 ********* 2025-04-01 19:50:10.148286 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148291 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148296 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148301 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148306 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148311 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148315 | orchestrator | 2025-04-01 19:50:10.148320 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.148325 | orchestrator | Tuesday 01 April 2025 19:46:39 +0000 (0:00:00.968) 0:10:26.665 ********* 2025-04-01 19:50:10.148330 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148335 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148340 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148345 | orchestrator | skipping: 
[testbed-node-3] 2025-04-01 19:50:10.148350 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148355 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148360 | orchestrator | 2025-04-01 19:50:10.148365 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.148370 | orchestrator | Tuesday 01 April 2025 19:46:39 +0000 (0:00:00.757) 0:10:27.423 ********* 2025-04-01 19:50:10.148387 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148392 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148397 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148402 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148407 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148412 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148417 | orchestrator | 2025-04-01 19:50:10.148422 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.148427 | orchestrator | Tuesday 01 April 2025 19:46:40 +0000 (0:00:00.975) 0:10:28.399 ********* 2025-04-01 19:50:10.148432 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148437 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148442 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148447 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148452 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148460 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148465 | orchestrator | 2025-04-01 19:50:10.148470 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.148475 | orchestrator | Tuesday 01 April 2025 19:46:41 +0000 (0:00:00.724) 0:10:29.123 ********* 2025-04-01 19:50:10.148480 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148485 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148490 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148495 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148500 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148505 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148510 | orchestrator | 2025-04-01 19:50:10.148515 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.148520 | orchestrator | Tuesday 01 April 2025 19:46:42 +0000 (0:00:00.976) 0:10:30.100 ********* 2025-04-01 19:50:10.148525 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148530 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148535 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148540 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148544 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148549 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148554 | orchestrator | 2025-04-01 19:50:10.148559 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.148564 | orchestrator | Tuesday 01 April 2025 19:46:43 +0000 (0:00:00.750) 0:10:30.851 ********* 2025-04-01 19:50:10.148569 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 19:50:10.148574 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-01 
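
For reference, the skipped ceph-config tasks above ("run 'ceph-volume lvm batch --report'", the legacy/new report variants, "run 'ceph-volume lvm list'", and "set_fact num_osds (add existing osds)") amount to counting how many OSDs would be created plus how many already exist. The following is only a hand-written sketch of that idea, not the actual ceph-ansible role code; the devices variable, the fact names, and the assumption that the report is the newer list-style JSON are all illustrative.

# Sketch only: count OSDs to be created, then add OSDs already present.
- name: run 'ceph-volume lvm batch --report' (sketch)
  ansible.builtin.command: >
    ceph-volume lvm batch --report --format=json {{ devices | join(' ') }}
  register: lvm_batch_report
  changed_when: false

- name: count osds to be created (assumes the newer list-style report JSON)
  ansible.builtin.set_fact:
    num_osds: "{{ (lvm_batch_report.stdout | from_json) | length }}"

- name: run 'ceph-volume lvm list' to see how many osds already exist (sketch)
  ansible.builtin.command: ceph-volume lvm list --format=json
  register: lvm_list
  changed_when: false

- name: add existing osds to the count
  ansible.builtin.set_fact:
    num_osds: "{{ (num_osds | int) + ((lvm_list.stdout | from_json) | length) }}"

The two separate set_fact tasks in the log exist because older ceph-volume releases emit a dict-shaped ("legacy") report while newer ones emit a list, so the parsing differs; the sketch above only covers the list case.
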
19:50:10.148579 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148584 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.148589 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-01 19:50:10.148594 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148599 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.148604 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-01 19:50:10.148609 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148614 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.148619 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.148646 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148651 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.148656 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.148661 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148666 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.148671 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.148676 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148681 | orchestrator | 2025-04-01 19:50:10.148686 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.148691 | orchestrator | Tuesday 01 April 2025 19:46:44 +0000 (0:00:01.074) 0:10:31.925 ********* 2025-04-01 19:50:10.148696 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-01 19:50:10.148704 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-01 19:50:10.148709 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148716 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-01 19:50:10.148721 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-01 19:50:10.148726 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148731 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-01 19:50:10.148736 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-01 19:50:10.148741 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148746 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-01 19:50:10.148754 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-01 19:50:10.148759 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148764 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-01 19:50:10.148769 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-01 19:50:10.148774 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148779 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-01 19:50:10.148783 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-01 19:50:10.148788 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148793 | orchestrator | 2025-04-01 19:50:10.148798 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.148803 | orchestrator | Tuesday 01 April 2025 19:46:45 +0000 (0:00:00.858) 0:10:32.784 ********* 2025-04-01 19:50:10.148808 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148813 | 
orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148817 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148822 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148827 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148832 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148837 | orchestrator | 2025-04-01 19:50:10.148842 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.148859 | orchestrator | Tuesday 01 April 2025 19:46:46 +0000 (0:00:01.004) 0:10:33.788 ********* 2025-04-01 19:50:10.148865 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148870 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148875 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148880 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148884 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148889 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148894 | orchestrator | 2025-04-01 19:50:10.148899 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.148905 | orchestrator | Tuesday 01 April 2025 19:46:46 +0000 (0:00:00.725) 0:10:34.514 ********* 2025-04-01 19:50:10.148910 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148915 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148920 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148925 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148929 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148934 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148939 | orchestrator | 2025-04-01 19:50:10.148944 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.148949 | orchestrator | Tuesday 01 April 2025 19:46:48 +0000 (0:00:01.147) 0:10:35.662 ********* 2025-04-01 19:50:10.148954 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.148959 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.148964 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.148969 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.148974 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.148979 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.148984 | orchestrator | 2025-04-01 19:50:10.148989 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.148994 | orchestrator | Tuesday 01 April 2025 19:46:48 +0000 (0:00:00.775) 0:10:36.437 ********* 2025-04-01 19:50:10.148998 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149003 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149008 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149013 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149018 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149023 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149028 | orchestrator | 2025-04-01 19:50:10.149033 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.149041 | orchestrator | Tuesday 01 April 2025 19:46:49 +0000 (0:00:00.935) 0:10:37.373 ********* 2025-04-01 19:50:10.149046 | 
orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149051 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149056 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149061 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149066 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149071 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149076 | orchestrator | 2025-04-01 19:50:10.149083 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.149088 | orchestrator | Tuesday 01 April 2025 19:46:50 +0000 (0:00:00.745) 0:10:38.119 ********* 2025-04-01 19:50:10.149093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.149098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.149103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.149108 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149113 | orchestrator | 2025-04-01 19:50:10.149118 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.149123 | orchestrator | Tuesday 01 April 2025 19:46:50 +0000 (0:00:00.464) 0:10:38.584 ********* 2025-04-01 19:50:10.149128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.149133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.149138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.149143 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149148 | orchestrator | 2025-04-01 19:50:10.149153 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.149158 | orchestrator | Tuesday 01 April 2025 19:46:51 +0000 (0:00:00.495) 0:10:39.079 ********* 2025-04-01 19:50:10.149163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.149167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.149172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.149177 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149182 | orchestrator | 2025-04-01 19:50:10.149187 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.149192 | orchestrator | Tuesday 01 April 2025 19:46:52 +0000 (0:00:00.741) 0:10:39.821 ********* 2025-04-01 19:50:10.149197 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149202 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149207 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149212 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149217 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149221 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149226 | orchestrator | 2025-04-01 19:50:10.149231 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.149236 | orchestrator | Tuesday 01 April 2025 19:46:53 +0000 (0:00:01.032) 0:10:40.853 ********* 2025-04-01 19:50:10.149241 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.149246 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149254 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-04-01 19:50:10.149259 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149264 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.149269 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149274 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.149278 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149283 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.149300 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149306 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.149316 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149321 | orchestrator | 2025-04-01 19:50:10.149326 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.149331 | orchestrator | Tuesday 01 April 2025 19:46:54 +0000 (0:00:00.846) 0:10:41.699 ********* 2025-04-01 19:50:10.149336 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149341 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149345 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149350 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149355 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149360 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149365 | orchestrator | 2025-04-01 19:50:10.149370 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.149375 | orchestrator | Tuesday 01 April 2025 19:46:55 +0000 (0:00:01.039) 0:10:42.739 ********* 2025-04-01 19:50:10.149380 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149385 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149390 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149395 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149400 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149404 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149409 | orchestrator | 2025-04-01 19:50:10.149414 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.149419 | orchestrator | Tuesday 01 April 2025 19:46:55 +0000 (0:00:00.778) 0:10:43.517 ********* 2025-04-01 19:50:10.149424 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-01 19:50:10.149429 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149434 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-01 19:50:10.149439 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-01 19:50:10.149444 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149449 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149454 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.149459 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149464 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.149469 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149474 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.149478 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149483 | orchestrator | 2025-04-01 19:50:10.149488 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 
19:50:10.149493 | orchestrator | Tuesday 01 April 2025 19:46:57 +0000 (0:00:01.522) 0:10:45.039 ********* 2025-04-01 19:50:10.149498 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149503 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149508 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149513 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.149518 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149523 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.149528 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149533 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.149538 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149543 | orchestrator | 2025-04-01 19:50:10.149548 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.149553 | orchestrator | Tuesday 01 April 2025 19:46:58 +0000 (0:00:00.746) 0:10:45.785 ********* 2025-04-01 19:50:10.149558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-01 19:50:10.149563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-01 19:50:10.149571 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-01 19:50:10.149576 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149581 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-01 19:50:10.149586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-01 19:50:10.149591 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-01 19:50:10.149595 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-01 19:50:10.149605 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-01 19:50:10.149610 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-01 19:50:10.149615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.149620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.149637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.149642 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.149652 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.149657 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.149661 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149666 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149671 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.149679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.149684 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.149689 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149694 | 
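
The rgw_instances facts whose skip output is shown above carry one entry per RGW host (instance_name rgw0, the node's 192.168.16.x address, frontend port 8081); the later "generate ceph.conf configuration file" task consumes them. A minimal sketch of the fact shape follows, with the rendered ceph.conf fragment shown only as a comment; the [client.rgw.<host>.<instance>] section naming and the beast frontends line are assumptions about the template output, not taken from this log.

# Sketch only: per-host RGW instance fact as seen in the skip output above.
- name: set_fact rgw_instances_host (sketch)
  ansible.builtin.set_fact:
    rgw_instances_host:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13   # testbed-node-3 in this run
        radosgw_frontend_port: 8081

# Roughly what the ceph.conf template is expected to render from it
# (section name convention assumed):
#   [client.rgw.testbed-node-3.rgw0]
#   rgw frontends = beast endpoint=192.168.16.13:8081
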
orchestrator | 2025-04-01 19:50:10.149699 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.149704 | orchestrator | Tuesday 01 April 2025 19:46:59 +0000 (0:00:01.610) 0:10:47.396 ********* 2025-04-01 19:50:10.149711 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149716 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149721 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149726 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149731 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149736 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149741 | orchestrator | 2025-04-01 19:50:10.149746 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.149750 | orchestrator | Tuesday 01 April 2025 19:47:01 +0000 (0:00:01.592) 0:10:48.988 ********* 2025-04-01 19:50:10.149755 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149760 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149765 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149770 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.149775 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149780 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.149785 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149790 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.149794 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149799 | orchestrator | 2025-04-01 19:50:10.149804 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.149809 | orchestrator | Tuesday 01 April 2025 19:47:02 +0000 (0:00:01.566) 0:10:50.555 ********* 2025-04-01 19:50:10.149814 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149819 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149824 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149828 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149833 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149841 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149849 | orchestrator | 2025-04-01 19:50:10.149854 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.149859 | orchestrator | Tuesday 01 April 2025 19:47:04 +0000 (0:00:01.516) 0:10:52.071 ********* 2025-04-01 19:50:10.149864 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:10.149869 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:10.149874 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:10.149879 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.149884 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.149888 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.149893 | orchestrator | 2025-04-01 19:50:10.149898 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-04-01 19:50:10.149903 | orchestrator | Tuesday 01 April 2025 19:47:06 +0000 (0:00:01.560) 0:10:53.631 ********* 2025-04-01 19:50:10.149908 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.149913 | orchestrator | 2025-04-01 19:50:10.149918 | orchestrator | TASK [ceph-crash : get keys 
from monitors] ************************************* 2025-04-01 19:50:10.149923 | orchestrator | Tuesday 01 April 2025 19:47:09 +0000 (0:00:03.452) 0:10:57.083 ********* 2025-04-01 19:50:10.149927 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.149932 | orchestrator | 2025-04-01 19:50:10.149940 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-04-01 19:50:10.149945 | orchestrator | Tuesday 01 April 2025 19:47:11 +0000 (0:00:01.631) 0:10:58.715 ********* 2025-04-01 19:50:10.149950 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.149954 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.149959 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.149964 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.149969 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.149974 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.149979 | orchestrator | 2025-04-01 19:50:10.149984 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-04-01 19:50:10.149989 | orchestrator | Tuesday 01 April 2025 19:47:13 +0000 (0:00:01.907) 0:11:00.623 ********* 2025-04-01 19:50:10.149993 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.149998 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.150003 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.150008 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.150025 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.150031 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.150036 | orchestrator | 2025-04-01 19:50:10.150041 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-04-01 19:50:10.150046 | orchestrator | Tuesday 01 April 2025 19:47:14 +0000 (0:00:01.125) 0:11:01.748 ********* 2025-04-01 19:50:10.150051 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.150056 | orchestrator | 2025-04-01 19:50:10.150060 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-04-01 19:50:10.150065 | orchestrator | Tuesday 01 April 2025 19:47:15 +0000 (0:00:01.506) 0:11:03.254 ********* 2025-04-01 19:50:10.150070 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.150075 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.150080 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.150085 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.150090 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.150095 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.150099 | orchestrator | 2025-04-01 19:50:10.150104 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-04-01 19:50:10.150109 | orchestrator | Tuesday 01 April 2025 19:47:17 +0000 (0:00:01.720) 0:11:04.975 ********* 2025-04-01 19:50:10.150114 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.150119 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.150124 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.150132 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.150137 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.150142 | orchestrator | changed: [testbed-node-5] 2025-04-01 
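
The ceph-crash rollout above has three moving parts: a client.crash keyring, the /var/lib/ceph/crash/posted directory, and a systemd unit for the containerized ceph-crash daemon. Below is a minimal sketch of those steps, not the actual ceph-crash role; the keyring capabilities follow the upstream ceph-crash recommendation, while the delegation target, template path, unit name, and the uid 167 are assumptions.

# Sketch only: the three steps behind the ceph-crash tasks above.
- name: create client.crash keyring (capabilities per upstream ceph-crash docs)
  ansible.builtin.command: >
    ceph auth get-or-create client.crash
    mon 'profile crash' mgr 'profile crash'
  delegate_to: testbed-node-0
  run_once: true
  changed_when: false   # idempotence detection omitted in this sketch

- name: create /var/lib/ceph/crash/posted
  ansible.builtin.file:
    path: /var/lib/ceph/crash/posted
    state: directory
    owner: "167"   # assumed ceph uid inside the container images
    group: "167"
    mode: "0750"

- name: generate systemd unit file for the ceph-crash container (template path assumed)
  ansible.builtin.template:
    src: ceph-crash.service.j2
    dest: /etc/systemd/system/ceph-crash@.service
    mode: "0644"

- name: start the ceph-crash service (unit name assumed)
  ansible.builtin.systemd:
    name: ceph-crash@{{ ansible_facts['hostname'] }}
    state: started
    enabled: true
    daemon_reload: true
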
19:50:10.150147 | orchestrator | 2025-04-01 19:50:10.150151 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-04-01 19:50:10.150156 | orchestrator | Tuesday 01 April 2025 19:47:21 +0000 (0:00:04.032) 0:11:09.008 ********* 2025-04-01 19:50:10.150165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.150170 | orchestrator | 2025-04-01 19:50:10.150175 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-04-01 19:50:10.150180 | orchestrator | Tuesday 01 April 2025 19:47:22 +0000 (0:00:01.589) 0:11:10.597 ********* 2025-04-01 19:50:10.150185 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.150190 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.150195 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.150200 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150205 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150210 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150215 | orchestrator | 2025-04-01 19:50:10.150219 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-04-01 19:50:10.150224 | orchestrator | Tuesday 01 April 2025 19:47:23 +0000 (0:00:00.784) 0:11:11.382 ********* 2025-04-01 19:50:10.150229 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:10.150234 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:10.150239 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:10.150244 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.150249 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.150254 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.150258 | orchestrator | 2025-04-01 19:50:10.150263 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-04-01 19:50:10.150268 | orchestrator | Tuesday 01 April 2025 19:47:26 +0000 (0:00:02.806) 0:11:14.188 ********* 2025-04-01 19:50:10.150273 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:10.150278 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:10.150283 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:10.150288 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150293 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150300 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150305 | orchestrator | 2025-04-01 19:50:10.150310 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-04-01 19:50:10.150315 | orchestrator | 2025-04-01 19:50:10.150320 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.150325 | orchestrator | Tuesday 01 April 2025 19:47:29 +0000 (0:00:03.034) 0:11:17.223 ********* 2025-04-01 19:50:10.150330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.150338 | orchestrator | 2025-04-01 19:50:10.150343 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-01 19:50:10.150348 | orchestrator | Tuesday 01 April 2025 19:47:30 +0000 (0:00:00.849) 0:11:18.073 ********* 2025-04-01 19:50:10.150353 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150358 | 
orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150363 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150368 | orchestrator | 2025-04-01 19:50:10.150373 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.150377 | orchestrator | Tuesday 01 April 2025 19:47:30 +0000 (0:00:00.413) 0:11:18.486 ********* 2025-04-01 19:50:10.150382 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150387 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150392 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150397 | orchestrator | 2025-04-01 19:50:10.150402 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.150414 | orchestrator | Tuesday 01 April 2025 19:47:31 +0000 (0:00:00.807) 0:11:19.294 ********* 2025-04-01 19:50:10.150419 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150424 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150429 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150433 | orchestrator | 2025-04-01 19:50:10.150438 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-01 19:50:10.150443 | orchestrator | Tuesday 01 April 2025 19:47:32 +0000 (0:00:00.788) 0:11:20.083 ********* 2025-04-01 19:50:10.150448 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150453 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150458 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150463 | orchestrator | 2025-04-01 19:50:10.150468 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.150473 | orchestrator | Tuesday 01 April 2025 19:47:33 +0000 (0:00:01.125) 0:11:21.208 ********* 2025-04-01 19:50:10.150478 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150482 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150487 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150492 | orchestrator | 2025-04-01 19:50:10.150500 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.150505 | orchestrator | Tuesday 01 April 2025 19:47:33 +0000 (0:00:00.373) 0:11:21.582 ********* 2025-04-01 19:50:10.150510 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150515 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150520 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150525 | orchestrator | 2025-04-01 19:50:10.150530 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.150535 | orchestrator | Tuesday 01 April 2025 19:47:34 +0000 (0:00:00.411) 0:11:21.993 ********* 2025-04-01 19:50:10.150539 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150544 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150549 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150554 | orchestrator | 2025-04-01 19:50:10.150559 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.150564 | orchestrator | Tuesday 01 April 2025 19:47:34 +0000 (0:00:00.422) 0:11:22.416 ********* 2025-04-01 19:50:10.150569 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150574 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150579 | orchestrator | skipping: [testbed-node-5] 
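
The "check for a ... container" tasks above feed the handler_*_status facts that later decide whether a daemon restart handler actually does anything on a node. A minimal sketch of that pattern follows; it assumes podman, the ceph-<daemon>-<hostname> container naming scheme, and the register/fact names shown here, none of which are confirmed by this log.

# Sketch only: detect a running MDS container and derive the handler fact.
- name: check for a mds container (sketch)
  ansible.builtin.command: >
    podman ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}
  register: ceph_mds_container_stat
  changed_when: false
  failed_when: false

- name: set_fact handler_mds_status (sketch)
  ansible.builtin.set_fact:
    handler_mds_status: "{{ ceph_mds_container_stat.stdout | length > 0 }}"

Whether docker or podman is used depends on the deployment's container runtime; the same filter-by-name check works with either CLI.
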
2025-04-01 19:50:10.150584 | orchestrator | 2025-04-01 19:50:10.150588 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.150593 | orchestrator | Tuesday 01 April 2025 19:47:35 +0000 (0:00:00.716) 0:11:23.132 ********* 2025-04-01 19:50:10.150598 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150603 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150608 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150613 | orchestrator | 2025-04-01 19:50:10.150618 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.150633 | orchestrator | Tuesday 01 April 2025 19:47:35 +0000 (0:00:00.429) 0:11:23.562 ********* 2025-04-01 19:50:10.150639 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150644 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150649 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150654 | orchestrator | 2025-04-01 19:50:10.150658 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.150663 | orchestrator | Tuesday 01 April 2025 19:47:36 +0000 (0:00:00.410) 0:11:23.973 ********* 2025-04-01 19:50:10.150668 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150673 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150678 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150683 | orchestrator | 2025-04-01 19:50:10.150688 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.150693 | orchestrator | Tuesday 01 April 2025 19:47:37 +0000 (0:00:00.759) 0:11:24.732 ********* 2025-04-01 19:50:10.150704 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150709 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150714 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150719 | orchestrator | 2025-04-01 19:50:10.150724 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.150729 | orchestrator | Tuesday 01 April 2025 19:47:37 +0000 (0:00:00.666) 0:11:25.399 ********* 2025-04-01 19:50:10.150734 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150739 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150743 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150748 | orchestrator | 2025-04-01 19:50:10.150753 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.150758 | orchestrator | Tuesday 01 April 2025 19:47:38 +0000 (0:00:00.371) 0:11:25.771 ********* 2025-04-01 19:50:10.150763 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150768 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150773 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150778 | orchestrator | 2025-04-01 19:50:10.150783 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.150787 | orchestrator | Tuesday 01 April 2025 19:47:38 +0000 (0:00:00.381) 0:11:26.152 ********* 2025-04-01 19:50:10.150792 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150797 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150802 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150807 | orchestrator | 2025-04-01 19:50:10.150812 | orchestrator | TASK [ceph-handler : 
set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.150817 | orchestrator | Tuesday 01 April 2025 19:47:38 +0000 (0:00:00.365) 0:11:26.517 ********* 2025-04-01 19:50:10.150822 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150827 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150832 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150836 | orchestrator | 2025-04-01 19:50:10.150841 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.150846 | orchestrator | Tuesday 01 April 2025 19:47:39 +0000 (0:00:00.718) 0:11:27.235 ********* 2025-04-01 19:50:10.150851 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150856 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150864 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150869 | orchestrator | 2025-04-01 19:50:10.150874 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.150879 | orchestrator | Tuesday 01 April 2025 19:47:40 +0000 (0:00:00.379) 0:11:27.615 ********* 2025-04-01 19:50:10.150884 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150889 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150894 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150899 | orchestrator | 2025-04-01 19:50:10.150904 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.150909 | orchestrator | Tuesday 01 April 2025 19:47:40 +0000 (0:00:00.488) 0:11:28.103 ********* 2025-04-01 19:50:10.150914 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150918 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150923 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150928 | orchestrator | 2025-04-01 19:50:10.150933 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.150938 | orchestrator | Tuesday 01 April 2025 19:47:41 +0000 (0:00:00.550) 0:11:28.654 ********* 2025-04-01 19:50:10.150943 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.150947 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.150952 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.150957 | orchestrator | 2025-04-01 19:50:10.150962 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.150967 | orchestrator | Tuesday 01 April 2025 19:47:41 +0000 (0:00:00.821) 0:11:29.475 ********* 2025-04-01 19:50:10.150972 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.150981 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.150986 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.150990 | orchestrator | 2025-04-01 19:50:10.150998 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.151003 | orchestrator | Tuesday 01 April 2025 19:47:42 +0000 (0:00:00.458) 0:11:29.933 ********* 2025-04-01 19:50:10.151008 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151013 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151018 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151023 | orchestrator | 2025-04-01 19:50:10.151027 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.151032 | 
orchestrator | Tuesday 01 April 2025 19:47:42 +0000 (0:00:00.396) 0:11:30.330 ********* 2025-04-01 19:50:10.151037 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151042 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151047 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151052 | orchestrator | 2025-04-01 19:50:10.151057 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.151062 | orchestrator | Tuesday 01 April 2025 19:47:43 +0000 (0:00:00.328) 0:11:30.658 ********* 2025-04-01 19:50:10.151067 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151071 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151076 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151081 | orchestrator | 2025-04-01 19:50:10.151086 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.151093 | orchestrator | Tuesday 01 April 2025 19:47:43 +0000 (0:00:00.720) 0:11:31.379 ********* 2025-04-01 19:50:10.151098 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151103 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151108 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151113 | orchestrator | 2025-04-01 19:50:10.151118 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.151123 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.450) 0:11:31.829 ********* 2025-04-01 19:50:10.151128 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151133 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151138 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151142 | orchestrator | 2025-04-01 19:50:10.151147 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.151152 | orchestrator | Tuesday 01 April 2025 19:47:44 +0000 (0:00:00.393) 0:11:32.223 ********* 2025-04-01 19:50:10.151157 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151162 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151167 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151172 | orchestrator | 2025-04-01 19:50:10.151177 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.151182 | orchestrator | Tuesday 01 April 2025 19:47:45 +0000 (0:00:00.398) 0:11:32.621 ********* 2025-04-01 19:50:10.151187 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151192 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151196 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151201 | orchestrator | 2025-04-01 19:50:10.151206 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.151211 | orchestrator | Tuesday 01 April 2025 19:47:45 +0000 (0:00:00.852) 0:11:33.474 ********* 2025-04-01 19:50:10.151220 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151225 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151230 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151235 | orchestrator | 2025-04-01 19:50:10.151240 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 
19:50:10.151245 | orchestrator | Tuesday 01 April 2025 19:47:46 +0000 (0:00:00.438) 0:11:33.912 ********* 2025-04-01 19:50:10.151250 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151258 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151263 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151268 | orchestrator | 2025-04-01 19:50:10.151273 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.151278 | orchestrator | Tuesday 01 April 2025 19:47:46 +0000 (0:00:00.409) 0:11:34.322 ********* 2025-04-01 19:50:10.151283 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151288 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151293 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151297 | orchestrator | 2025-04-01 19:50:10.151302 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.151307 | orchestrator | Tuesday 01 April 2025 19:47:47 +0000 (0:00:00.435) 0:11:34.757 ********* 2025-04-01 19:50:10.151312 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151317 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151322 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151327 | orchestrator | 2025-04-01 19:50:10.151332 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.151337 | orchestrator | Tuesday 01 April 2025 19:47:47 +0000 (0:00:00.850) 0:11:35.608 ********* 2025-04-01 19:50:10.151342 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.151347 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.151352 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151356 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.151361 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.151366 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151371 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.151379 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.151384 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151389 | orchestrator | 2025-04-01 19:50:10.151393 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.151398 | orchestrator | Tuesday 01 April 2025 19:47:48 +0000 (0:00:00.482) 0:11:36.090 ********* 2025-04-01 19:50:10.151403 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-01 19:50:10.151408 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-01 19:50:10.151413 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151418 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-01 19:50:10.151423 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-01 19:50:10.151428 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151435 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-01 19:50:10.151440 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-01 19:50:10.151445 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151450 | orchestrator | 2025-04-01 19:50:10.151455 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-04-01 19:50:10.151460 | orchestrator | Tuesday 01 April 2025 19:47:48 +0000 (0:00:00.433) 0:11:36.524 ********* 2025-04-01 19:50:10.151465 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151469 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151474 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151479 | orchestrator | 2025-04-01 19:50:10.151484 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.151489 | orchestrator | Tuesday 01 April 2025 19:47:49 +0000 (0:00:00.386) 0:11:36.910 ********* 2025-04-01 19:50:10.151494 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151499 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151504 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151508 | orchestrator | 2025-04-01 19:50:10.151516 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.151525 | orchestrator | Tuesday 01 April 2025 19:47:50 +0000 (0:00:00.733) 0:11:37.644 ********* 2025-04-01 19:50:10.151530 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151535 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151539 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151544 | orchestrator | 2025-04-01 19:50:10.151549 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.151554 | orchestrator | Tuesday 01 April 2025 19:47:50 +0000 (0:00:00.449) 0:11:38.094 ********* 2025-04-01 19:50:10.151559 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151564 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151569 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151573 | orchestrator | 2025-04-01 19:50:10.151578 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.151583 | orchestrator | Tuesday 01 April 2025 19:47:50 +0000 (0:00:00.374) 0:11:38.468 ********* 2025-04-01 19:50:10.151588 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151593 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151598 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151603 | orchestrator | 2025-04-01 19:50:10.151608 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.151616 | orchestrator | Tuesday 01 April 2025 19:47:51 +0000 (0:00:00.372) 0:11:38.841 ********* 2025-04-01 19:50:10.151621 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151648 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151653 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151658 | orchestrator | 2025-04-01 19:50:10.151663 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.151667 | orchestrator | Tuesday 01 April 2025 19:47:51 +0000 (0:00:00.711) 0:11:39.553 ********* 2025-04-01 19:50:10.151672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.151677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.151682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.151687 | orchestrator | 
skipping: [testbed-node-3] 2025-04-01 19:50:10.151692 | orchestrator | 2025-04-01 19:50:10.151697 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.151702 | orchestrator | Tuesday 01 April 2025 19:47:52 +0000 (0:00:00.506) 0:11:40.059 ********* 2025-04-01 19:50:10.151707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.151711 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.151716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.151721 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151726 | orchestrator | 2025-04-01 19:50:10.151731 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.151736 | orchestrator | Tuesday 01 April 2025 19:47:53 +0000 (0:00:00.648) 0:11:40.708 ********* 2025-04-01 19:50:10.151741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.151746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.151750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.151755 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151760 | orchestrator | 2025-04-01 19:50:10.151765 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.151770 | orchestrator | Tuesday 01 April 2025 19:47:53 +0000 (0:00:00.695) 0:11:41.404 ********* 2025-04-01 19:50:10.151775 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151780 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151785 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151789 | orchestrator | 2025-04-01 19:50:10.151794 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.151803 | orchestrator | Tuesday 01 April 2025 19:47:54 +0000 (0:00:00.389) 0:11:41.794 ********* 2025-04-01 19:50:10.151808 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.151813 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151818 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.151823 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151828 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.151832 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151837 | orchestrator | 2025-04-01 19:50:10.151842 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.151847 | orchestrator | Tuesday 01 April 2025 19:47:55 +0000 (0:00:00.918) 0:11:42.712 ********* 2025-04-01 19:50:10.151852 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151857 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151862 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151867 | orchestrator | 2025-04-01 19:50:10.151871 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.151876 | orchestrator | Tuesday 01 April 2025 19:47:55 +0000 (0:00:00.682) 0:11:43.395 ********* 2025-04-01 19:50:10.151881 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151886 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151891 | orchestrator | skipping: 
[testbed-node-5] 2025-04-01 19:50:10.151896 | orchestrator | 2025-04-01 19:50:10.151901 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.151906 | orchestrator | Tuesday 01 April 2025 19:47:56 +0000 (0:00:00.455) 0:11:43.850 ********* 2025-04-01 19:50:10.151911 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.151915 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151920 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.151925 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151930 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.151935 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151940 | orchestrator | 2025-04-01 19:50:10.151945 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.151950 | orchestrator | Tuesday 01 April 2025 19:47:56 +0000 (0:00:00.618) 0:11:44.469 ********* 2025-04-01 19:50:10.151959 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.151964 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.151969 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.151974 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.151979 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.151984 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.151989 | orchestrator | 2025-04-01 19:50:10.151994 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.151999 | orchestrator | Tuesday 01 April 2025 19:47:57 +0000 (0:00:00.765) 0:11:45.235 ********* 2025-04-01 19:50:10.152004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.152009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.152014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.152019 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152024 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.152029 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.152034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.152038 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152043 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.152052 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.152057 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.152062 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152066 | orchestrator | 2025-04-01 19:50:10.152071 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.152076 | orchestrator | Tuesday 01 April 2025 19:47:58 +0000 (0:00:00.640) 0:11:45.875 ********* 2025-04-01 19:50:10.152081 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152086 | 
orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152091 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152096 | orchestrator | 2025-04-01 19:50:10.152101 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.152106 | orchestrator | Tuesday 01 April 2025 19:47:59 +0000 (0:00:00.774) 0:11:46.650 ********* 2025-04-01 19:50:10.152111 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.152115 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152120 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.152125 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152130 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.152135 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152140 | orchestrator | 2025-04-01 19:50:10.152144 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.152149 | orchestrator | Tuesday 01 April 2025 19:47:59 +0000 (0:00:00.778) 0:11:47.428 ********* 2025-04-01 19:50:10.152154 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152159 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152164 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152169 | orchestrator | 2025-04-01 19:50:10.152174 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.152179 | orchestrator | Tuesday 01 April 2025 19:48:01 +0000 (0:00:01.320) 0:11:48.749 ********* 2025-04-01 19:50:10.152184 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152190 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152194 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152199 | orchestrator | 2025-04-01 19:50:10.152204 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-04-01 19:50:10.152209 | orchestrator | Tuesday 01 April 2025 19:48:01 +0000 (0:00:00.779) 0:11:49.528 ********* 2025-04-01 19:50:10.152214 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152219 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152224 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-04-01 19:50:10.152229 | orchestrator | 2025-04-01 19:50:10.152234 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-04-01 19:50:10.152241 | orchestrator | Tuesday 01 April 2025 19:48:02 +0000 (0:00:00.644) 0:11:50.173 ********* 2025-04-01 19:50:10.152246 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.152251 | orchestrator | 2025-04-01 19:50:10.152256 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-04-01 19:50:10.152261 | orchestrator | Tuesday 01 April 2025 19:48:04 +0000 (0:00:01.921) 0:11:52.095 ********* 2025-04-01 19:50:10.152267 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-04-01 19:50:10.152273 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152278 | orchestrator | 2025-04-01 19:50:10.152283 | orchestrator | TASK [ceph-mds : 
create filesystem pools] ************************************** 2025-04-01 19:50:10.152288 | orchestrator | Tuesday 01 April 2025 19:48:04 +0000 (0:00:00.383) 0:11:52.479 ********* 2025-04-01 19:50:10.152294 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:50:10.152305 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:50:10.152310 | orchestrator | 2025-04-01 19:50:10.152315 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-04-01 19:50:10.152320 | orchestrator | Tuesday 01 April 2025 19:48:11 +0000 (0:00:06.217) 0:11:58.696 ********* 2025-04-01 19:50:10.152325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-01 19:50:10.152330 | orchestrator | 2025-04-01 19:50:10.152335 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-04-01 19:50:10.152340 | orchestrator | Tuesday 01 April 2025 19:48:14 +0000 (0:00:02.952) 0:12:01.649 ********* 2025-04-01 19:50:10.152345 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.152350 | orchestrator | 2025-04-01 19:50:10.152355 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-04-01 19:50:10.152360 | orchestrator | Tuesday 01 April 2025 19:48:14 +0000 (0:00:00.662) 0:12:02.311 ********* 2025-04-01 19:50:10.152364 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-01 19:50:10.152369 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-04-01 19:50:10.152374 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-01 19:50:10.152379 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-01 19:50:10.152384 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-04-01 19:50:10.152389 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-04-01 19:50:10.152394 | orchestrator | 2025-04-01 19:50:10.152399 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-04-01 19:50:10.152404 | orchestrator | Tuesday 01 April 2025 19:48:16 +0000 (0:00:01.413) 0:12:03.724 ********* 2025-04-01 19:50:10.152408 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:50:10.152413 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.152418 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-01 19:50:10.152423 | orchestrator | 2025-04-01 19:50:10.152428 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-04-01 19:50:10.152433 | orchestrator | Tuesday 01 April 2025 19:48:17 +0000 (0:00:01.608) 0:12:05.333 ********* 2025-04-01 19:50:10.152438 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 
19:50:10.152443 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.152448 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152453 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 19:50:10.152457 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 19:50:10.152462 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.152467 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152472 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.152477 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152482 | orchestrator | 2025-04-01 19:50:10.152487 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-04-01 19:50:10.152492 | orchestrator | Tuesday 01 April 2025 19:48:18 +0000 (0:00:01.081) 0:12:06.415 ********* 2025-04-01 19:50:10.152497 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152501 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.152506 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.152515 | orchestrator | 2025-04-01 19:50:10.152520 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-04-01 19:50:10.152524 | orchestrator | Tuesday 01 April 2025 19:48:19 +0000 (0:00:00.367) 0:12:06.782 ********* 2025-04-01 19:50:10.152529 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.152534 | orchestrator | 2025-04-01 19:50:10.152539 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-04-01 19:50:10.152544 | orchestrator | Tuesday 01 April 2025 19:48:20 +0000 (0:00:00.993) 0:12:07.776 ********* 2025-04-01 19:50:10.152549 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.152554 | orchestrator | 2025-04-01 19:50:10.152559 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-04-01 19:50:10.152564 | orchestrator | Tuesday 01 April 2025 19:48:20 +0000 (0:00:00.660) 0:12:08.437 ********* 2025-04-01 19:50:10.152569 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152574 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152579 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152584 | orchestrator | 2025-04-01 19:50:10.152588 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-04-01 19:50:10.152593 | orchestrator | Tuesday 01 April 2025 19:48:22 +0000 (0:00:01.565) 0:12:10.002 ********* 2025-04-01 19:50:10.152598 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152603 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152608 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152613 | orchestrator | 2025-04-01 19:50:10.152618 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-04-01 19:50:10.152641 | orchestrator | Tuesday 01 April 2025 19:48:23 +0000 (0:00:01.156) 0:12:11.159 ********* 2025-04-01 19:50:10.152647 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152652 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152657 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152662 | orchestrator | 2025-04-01 
19:50:10.152672 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-04-01 19:50:10.152677 | orchestrator | Tuesday 01 April 2025 19:48:25 +0000 (0:00:01.622) 0:12:12.781 ********* 2025-04-01 19:50:10.152682 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152687 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152692 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152697 | orchestrator | 2025-04-01 19:50:10.152702 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-04-01 19:50:10.152707 | orchestrator | Tuesday 01 April 2025 19:48:27 +0000 (0:00:02.156) 0:12:14.937 ********* 2025-04-01 19:50:10.152712 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-04-01 19:50:10.152717 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-04-01 19:50:10.152722 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-04-01 19:50:10.152727 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.152732 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.152737 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.152742 | orchestrator | 2025-04-01 19:50:10.152746 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.152751 | orchestrator | Tuesday 01 April 2025 19:48:44 +0000 (0:00:17.077) 0:12:32.015 ********* 2025-04-01 19:50:10.152756 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152761 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152766 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152771 | orchestrator | 2025-04-01 19:50:10.152776 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-01 19:50:10.152781 | orchestrator | Tuesday 01 April 2025 19:48:45 +0000 (0:00:00.672) 0:12:32.687 ********* 2025-04-01 19:50:10.152789 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.152794 | orchestrator | 2025-04-01 19:50:10.152799 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-04-01 19:50:10.152804 | orchestrator | Tuesday 01 April 2025 19:48:46 +0000 (0:00:00.997) 0:12:33.685 ********* 2025-04-01 19:50:10.152809 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.152814 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.152819 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.152824 | orchestrator | 2025-04-01 19:50:10.152829 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-01 19:50:10.152834 | orchestrator | Tuesday 01 April 2025 19:48:46 +0000 (0:00:00.388) 0:12:34.073 ********* 2025-04-01 19:50:10.152839 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152844 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152849 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152854 | orchestrator | 2025-04-01 19:50:10.152859 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-04-01 19:50:10.152864 | orchestrator | Tuesday 01 April 2025 19:48:47 +0000 (0:00:01.257) 0:12:35.331 ********* 2025-04-01 19:50:10.152869 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.152874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.152879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.152884 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.152889 | orchestrator | 2025-04-01 19:50:10.152894 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-01 19:50:10.152898 | orchestrator | Tuesday 01 April 2025 19:48:48 +0000 (0:00:01.075) 0:12:36.406 ********* 2025-04-01 19:50:10.152903 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.152908 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.152913 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.152918 | orchestrator | 2025-04-01 19:50:10.152923 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.152928 | orchestrator | Tuesday 01 April 2025 19:48:49 +0000 (0:00:00.748) 0:12:37.155 ********* 2025-04-01 19:50:10.152933 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.152938 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.152943 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.152948 | orchestrator | 2025-04-01 19:50:10.152952 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-01 19:50:10.152957 | orchestrator | 2025-04-01 19:50:10.152962 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-01 19:50:10.152967 | orchestrator | Tuesday 01 April 2025 19:48:51 +0000 (0:00:02.261) 0:12:39.416 ********* 2025-04-01 19:50:10.152972 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.152980 | orchestrator | 2025-04-01 19:50:10.152985 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-01 19:50:10.152990 | orchestrator | Tuesday 01 April 2025 19:48:52 +0000 (0:00:00.954) 0:12:40.371 ********* 2025-04-01 19:50:10.152995 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153000 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153005 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153010 | orchestrator | 2025-04-01 19:50:10.153015 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-01 19:50:10.153020 | orchestrator | Tuesday 01 April 2025 19:48:53 +0000 (0:00:00.369) 0:12:40.740 ********* 2025-04-01 19:50:10.153025 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153030 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153034 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153039 | orchestrator | 2025-04-01 19:50:10.153044 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-01 19:50:10.153053 | orchestrator | Tuesday 01 April 2025 19:48:53 +0000 (0:00:00.719) 0:12:41.460 ********* 2025-04-01 19:50:10.153058 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153066 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153071 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153076 | orchestrator | 2025-04-01 19:50:10.153081 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
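[editor's note] The ceph-mds tasks above created the two CephFS pools reported by "create filesystem pools" (cephfs_data and cephfs_metadata, pg/pgp_num 16, size 3, replicated_rule), created the filesystem, and started one containerized MDS per node before polling for its admin socket. A rough ceph-CLI equivalent is sketched below for orientation only; the filesystem name "cephfs" and the socket path are assumed defaults, not values printed in this log.

    # Pools with the parameters shown in the task output above (pg/pgp_num 16, size 3, replicated_rule).
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph osd pool set cephfs_data size 3
    ceph osd pool set cephfs_metadata size 3
    # Create the filesystem; "cephfs" is an assumed name, not taken from this log.
    ceph fs new cephfs cephfs_metadata cephfs_data
    # The "wait for mds socket to exist" retries amount to polling for the daemon's admin socket.
    test -S /var/run/ceph/ceph-mds.testbed-node-3.asok && echo "mds up on testbed-node-3"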
2025-04-01 19:50:10.153088 | orchestrator | Tuesday 01 April 2025 19:48:54 +0000 (0:00:00.739) 0:12:42.200 ********* 2025-04-01 19:50:10.153093 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153098 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153103 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153108 | orchestrator | 2025-04-01 19:50:10.153116 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-01 19:50:10.153121 | orchestrator | Tuesday 01 April 2025 19:48:55 +0000 (0:00:01.196) 0:12:43.397 ********* 2025-04-01 19:50:10.153126 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153131 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153136 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153141 | orchestrator | 2025-04-01 19:50:10.153146 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-01 19:50:10.153150 | orchestrator | Tuesday 01 April 2025 19:48:56 +0000 (0:00:00.370) 0:12:43.768 ********* 2025-04-01 19:50:10.153155 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153160 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153165 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153170 | orchestrator | 2025-04-01 19:50:10.153175 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-01 19:50:10.153180 | orchestrator | Tuesday 01 April 2025 19:48:56 +0000 (0:00:00.354) 0:12:44.122 ********* 2025-04-01 19:50:10.153185 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153190 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153195 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153200 | orchestrator | 2025-04-01 19:50:10.153205 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-01 19:50:10.153214 | orchestrator | Tuesday 01 April 2025 19:48:56 +0000 (0:00:00.374) 0:12:44.496 ********* 2025-04-01 19:50:10.153219 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153224 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153229 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153234 | orchestrator | 2025-04-01 19:50:10.153239 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-01 19:50:10.153244 | orchestrator | Tuesday 01 April 2025 19:48:57 +0000 (0:00:00.665) 0:12:45.162 ********* 2025-04-01 19:50:10.153249 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153254 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153259 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153264 | orchestrator | 2025-04-01 19:50:10.153269 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-01 19:50:10.153274 | orchestrator | Tuesday 01 April 2025 19:48:57 +0000 (0:00:00.384) 0:12:45.547 ********* 2025-04-01 19:50:10.153278 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153283 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153288 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153296 | orchestrator | 2025-04-01 19:50:10.153301 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-01 19:50:10.153306 | orchestrator | Tuesday 01 April 2025 19:48:58 
+0000 (0:00:00.351) 0:12:45.899 ********* 2025-04-01 19:50:10.153311 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153316 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153321 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153326 | orchestrator | 2025-04-01 19:50:10.153331 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-01 19:50:10.153336 | orchestrator | Tuesday 01 April 2025 19:48:58 +0000 (0:00:00.677) 0:12:46.576 ********* 2025-04-01 19:50:10.153344 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153349 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153354 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153359 | orchestrator | 2025-04-01 19:50:10.153364 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-01 19:50:10.153369 | orchestrator | Tuesday 01 April 2025 19:48:59 +0000 (0:00:00.668) 0:12:47.244 ********* 2025-04-01 19:50:10.153374 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153379 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153384 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153389 | orchestrator | 2025-04-01 19:50:10.153393 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-01 19:50:10.153398 | orchestrator | Tuesday 01 April 2025 19:48:59 +0000 (0:00:00.343) 0:12:47.588 ********* 2025-04-01 19:50:10.153403 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153408 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153413 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153418 | orchestrator | 2025-04-01 19:50:10.153423 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-01 19:50:10.153428 | orchestrator | Tuesday 01 April 2025 19:49:00 +0000 (0:00:00.420) 0:12:48.009 ********* 2025-04-01 19:50:10.153433 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153438 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153443 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153447 | orchestrator | 2025-04-01 19:50:10.153452 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-01 19:50:10.153457 | orchestrator | Tuesday 01 April 2025 19:49:00 +0000 (0:00:00.359) 0:12:48.369 ********* 2025-04-01 19:50:10.153462 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153467 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153472 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153477 | orchestrator | 2025-04-01 19:50:10.153482 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-01 19:50:10.153487 | orchestrator | Tuesday 01 April 2025 19:49:01 +0000 (0:00:00.728) 0:12:49.097 ********* 2025-04-01 19:50:10.153492 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153497 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153502 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153506 | orchestrator | 2025-04-01 19:50:10.153511 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-01 19:50:10.153516 | orchestrator | Tuesday 01 April 2025 19:49:01 +0000 (0:00:00.341) 0:12:49.439 ********* 2025-04-01 19:50:10.153521 | orchestrator | skipping: [testbed-node-3] 2025-04-01 
19:50:10.153526 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153531 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153536 | orchestrator | 2025-04-01 19:50:10.153541 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-01 19:50:10.153546 | orchestrator | Tuesday 01 April 2025 19:49:02 +0000 (0:00:00.357) 0:12:49.796 ********* 2025-04-01 19:50:10.153553 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153558 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153563 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153568 | orchestrator | 2025-04-01 19:50:10.153573 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-01 19:50:10.153578 | orchestrator | Tuesday 01 April 2025 19:49:02 +0000 (0:00:00.332) 0:12:50.129 ********* 2025-04-01 19:50:10.153583 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.153591 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.153596 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.153601 | orchestrator | 2025-04-01 19:50:10.153608 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-01 19:50:10.153613 | orchestrator | Tuesday 01 April 2025 19:49:03 +0000 (0:00:00.709) 0:12:50.839 ********* 2025-04-01 19:50:10.153618 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153647 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153653 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153658 | orchestrator | 2025-04-01 19:50:10.153663 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-01 19:50:10.153668 | orchestrator | Tuesday 01 April 2025 19:49:03 +0000 (0:00:00.363) 0:12:51.203 ********* 2025-04-01 19:50:10.153673 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153678 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153683 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153688 | orchestrator | 2025-04-01 19:50:10.153692 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-01 19:50:10.153697 | orchestrator | Tuesday 01 April 2025 19:49:03 +0000 (0:00:00.377) 0:12:51.580 ********* 2025-04-01 19:50:10.153702 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153707 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153712 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153717 | orchestrator | 2025-04-01 19:50:10.153722 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-01 19:50:10.153727 | orchestrator | Tuesday 01 April 2025 19:49:04 +0000 (0:00:00.354) 0:12:51.934 ********* 2025-04-01 19:50:10.153732 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153737 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153742 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153746 | orchestrator | 2025-04-01 19:50:10.153751 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-01 19:50:10.153756 | orchestrator | Tuesday 01 April 2025 19:49:05 +0000 (0:00:00.683) 0:12:52.618 ********* 2025-04-01 19:50:10.153761 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153767 | orchestrator | skipping: [testbed-node-4] 2025-04-01 
19:50:10.153771 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153776 | orchestrator | 2025-04-01 19:50:10.153781 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-01 19:50:10.153786 | orchestrator | Tuesday 01 April 2025 19:49:05 +0000 (0:00:00.369) 0:12:52.988 ********* 2025-04-01 19:50:10.153791 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153796 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153801 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153806 | orchestrator | 2025-04-01 19:50:10.153811 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-01 19:50:10.153816 | orchestrator | Tuesday 01 April 2025 19:49:05 +0000 (0:00:00.340) 0:12:53.329 ********* 2025-04-01 19:50:10.153821 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153826 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153831 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153836 | orchestrator | 2025-04-01 19:50:10.153840 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-01 19:50:10.153845 | orchestrator | Tuesday 01 April 2025 19:49:06 +0000 (0:00:00.387) 0:12:53.716 ********* 2025-04-01 19:50:10.153850 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153855 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153860 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153865 | orchestrator | 2025-04-01 19:50:10.153870 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-01 19:50:10.153875 | orchestrator | Tuesday 01 April 2025 19:49:06 +0000 (0:00:00.671) 0:12:54.388 ********* 2025-04-01 19:50:10.153880 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153885 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153890 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153895 | orchestrator | 2025-04-01 19:50:10.153900 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-01 19:50:10.153905 | orchestrator | Tuesday 01 April 2025 19:49:07 +0000 (0:00:00.362) 0:12:54.750 ********* 2025-04-01 19:50:10.153910 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153918 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153923 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153928 | orchestrator | 2025-04-01 19:50:10.153933 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-01 19:50:10.153938 | orchestrator | Tuesday 01 April 2025 19:49:07 +0000 (0:00:00.381) 0:12:55.132 ********* 2025-04-01 19:50:10.153943 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153948 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.153952 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153957 | orchestrator | 2025-04-01 19:50:10.153962 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-01 19:50:10.153967 | orchestrator | Tuesday 01 April 2025 19:49:07 +0000 (0:00:00.365) 0:12:55.497 ********* 2025-04-01 19:50:10.153972 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.153977 | orchestrator | 
skipping: [testbed-node-4] 2025-04-01 19:50:10.153982 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.153987 | orchestrator | 2025-04-01 19:50:10.153992 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-01 19:50:10.153997 | orchestrator | Tuesday 01 April 2025 19:49:08 +0000 (0:00:00.678) 0:12:56.175 ********* 2025-04-01 19:50:10.154002 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.154007 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-01 19:50:10.154012 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154041 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.154046 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-01 19:50:10.154051 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154056 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.154061 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-01 19:50:10.154066 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154071 | orchestrator | 2025-04-01 19:50:10.154076 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-01 19:50:10.154081 | orchestrator | Tuesday 01 April 2025 19:49:08 +0000 (0:00:00.421) 0:12:56.597 ********* 2025-04-01 19:50:10.154086 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-01 19:50:10.154094 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-01 19:50:10.154099 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154104 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-01 19:50:10.154109 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-01 19:50:10.154114 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154119 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-01 19:50:10.154123 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-01 19:50:10.154128 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154133 | orchestrator | 2025-04-01 19:50:10.154138 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-01 19:50:10.154143 | orchestrator | Tuesday 01 April 2025 19:49:09 +0000 (0:00:00.437) 0:12:57.035 ********* 2025-04-01 19:50:10.154148 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154153 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154158 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154163 | orchestrator | 2025-04-01 19:50:10.154167 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-01 19:50:10.154172 | orchestrator | Tuesday 01 April 2025 19:49:09 +0000 (0:00:00.363) 0:12:57.398 ********* 2025-04-01 19:50:10.154177 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154182 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154187 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154192 | orchestrator | 2025-04-01 19:50:10.154197 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:50:10.154205 | orchestrator | Tuesday 01 April 2025 19:49:10 +0000 (0:00:00.664) 0:12:58.062 ********* 2025-04-01 
19:50:10.154210 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154217 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154222 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154227 | orchestrator | 2025-04-01 19:50:10.154232 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:50:10.154237 | orchestrator | Tuesday 01 April 2025 19:49:10 +0000 (0:00:00.434) 0:12:58.496 ********* 2025-04-01 19:50:10.154242 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154247 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154252 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154256 | orchestrator | 2025-04-01 19:50:10.154261 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:50:10.154266 | orchestrator | Tuesday 01 April 2025 19:49:11 +0000 (0:00:00.385) 0:12:58.882 ********* 2025-04-01 19:50:10.154271 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154276 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154281 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154286 | orchestrator | 2025-04-01 19:50:10.154291 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:50:10.154296 | orchestrator | Tuesday 01 April 2025 19:49:11 +0000 (0:00:00.365) 0:12:59.247 ********* 2025-04-01 19:50:10.154300 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154305 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154310 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154315 | orchestrator | 2025-04-01 19:50:10.154320 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:50:10.154325 | orchestrator | Tuesday 01 April 2025 19:49:12 +0000 (0:00:00.678) 0:12:59.925 ********* 2025-04-01 19:50:10.154330 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.154335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.154339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.154344 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154349 | orchestrator | 2025-04-01 19:50:10.154354 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:50:10.154359 | orchestrator | Tuesday 01 April 2025 19:49:12 +0000 (0:00:00.467) 0:13:00.393 ********* 2025-04-01 19:50:10.154364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.154369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.154374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.154378 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154383 | orchestrator | 2025-04-01 19:50:10.154388 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:50:10.154393 | orchestrator | Tuesday 01 April 2025 19:49:13 +0000 (0:00:00.444) 0:13:00.837 ********* 2025-04-01 19:50:10.154398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.154403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.154408 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-04-01 19:50:10.154413 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154418 | orchestrator | 2025-04-01 19:50:10.154422 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.154427 | orchestrator | Tuesday 01 April 2025 19:49:13 +0000 (0:00:00.459) 0:13:01.297 ********* 2025-04-01 19:50:10.154432 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154437 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154443 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154449 | orchestrator | 2025-04-01 19:50:10.154454 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:50:10.154462 | orchestrator | Tuesday 01 April 2025 19:49:14 +0000 (0:00:00.372) 0:13:01.669 ********* 2025-04-01 19:50:10.154467 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.154472 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154477 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.154482 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154487 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.154492 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154497 | orchestrator | 2025-04-01 19:50:10.154502 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:50:10.154506 | orchestrator | Tuesday 01 April 2025 19:49:14 +0000 (0:00:00.767) 0:13:02.437 ********* 2025-04-01 19:50:10.154511 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154516 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154521 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154526 | orchestrator | 2025-04-01 19:50:10.154531 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:50:10.154536 | orchestrator | Tuesday 01 April 2025 19:49:15 +0000 (0:00:00.358) 0:13:02.795 ********* 2025-04-01 19:50:10.154541 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154545 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154550 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154555 | orchestrator | 2025-04-01 19:50:10.154560 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:50:10.154565 | orchestrator | Tuesday 01 April 2025 19:49:15 +0000 (0:00:00.374) 0:13:03.170 ********* 2025-04-01 19:50:10.154570 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:50:10.154575 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154580 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:50:10.154585 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154590 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:50:10.154594 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154599 | orchestrator | 2025-04-01 19:50:10.154604 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:50:10.154609 | orchestrator | Tuesday 01 April 2025 19:49:16 +0000 (0:00:00.518) 0:13:03.688 ********* 2025-04-01 19:50:10.154614 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
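[editor's note] The skipped ceph-facts tasks in this block only recompute values already resolved earlier in the run: each gateway node carries a single instance named rgw0 bound to its own 192.168.16.1x address on port 8081. Once the rgw containers are started further down, the endpoints can be probed as sketched below; this curl loop is an illustration added here, not something the job executes.

    # Expect an HTTP 200 (the anonymous S3 ListAllMyBuckets response) from each gateway once it is up.
    for ip in 192.168.16.13 192.168.16.14 192.168.16.15; do
        curl -s -o /dev/null -w "%{http_code} ${ip}:8081\n" "http://${ip}:8081/"
    done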
2025-04-01 19:50:10.154632 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154638 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.154643 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154648 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:50:10.154653 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154658 | orchestrator | 2025-04-01 19:50:10.154663 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:50:10.154668 | orchestrator | Tuesday 01 April 2025 19:49:16 +0000 (0:00:00.703) 0:13:04.392 ********* 2025-04-01 19:50:10.154673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.154678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.154683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.154687 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154693 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:50:10.154698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:50:10.154702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:50:10.154707 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154712 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:50:10.154720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:50:10.154725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:50:10.154730 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154735 | orchestrator | 2025-04-01 19:50:10.154740 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-01 19:50:10.154745 | orchestrator | Tuesday 01 April 2025 19:49:17 +0000 (0:00:00.648) 0:13:05.041 ********* 2025-04-01 19:50:10.154750 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154755 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154760 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154765 | orchestrator | 2025-04-01 19:50:10.154769 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-01 19:50:10.154774 | orchestrator | Tuesday 01 April 2025 19:49:18 +0000 (0:00:00.865) 0:13:05.906 ********* 2025-04-01 19:50:10.154779 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.154784 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154789 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.154794 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154799 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.154804 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154809 | orchestrator | 2025-04-01 19:50:10.154814 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-01 19:50:10.154818 | orchestrator | Tuesday 01 April 2025 19:49:18 +0000 (0:00:00.613) 0:13:06.519 ********* 2025-04-01 19:50:10.154823 | orchestrator | skipping: [testbed-node-3] 
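[editor's note] A few tasks further on, the play fetches the required keyring(s) from the first monitor (testbed-node-0) and copies them onto the rados gateway nodes. Conceptually this matches exporting or creating the keys with the ceph CLI as sketched below; the client entity name follows the usual <host>.<instance> convention and is an assumption for illustration, not a value echoed in this log.

    # Export the rgw bootstrap key from a monitor...
    ceph auth get client.bootstrap-rgw -o /var/lib/ceph/bootstrap-rgw/ceph.keyring
    # ...or create a per-instance key directly (entity name assumed for illustration).
    ceph auth get-or-create client.rgw.testbed-node-3.rgw0 \
        mon 'allow rw' osd 'allow rwx' \
        -o /var/lib/ceph/radosgw/ceph-rgw.testbed-node-3.rgw0/keyring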
2025-04-01 19:50:10.154828 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154833 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154838 | orchestrator | 2025-04-01 19:50:10.154843 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-01 19:50:10.154850 | orchestrator | Tuesday 01 April 2025 19:49:19 +0000 (0:00:00.852) 0:13:07.372 ********* 2025-04-01 19:50:10.154855 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.154860 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.154865 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.154870 | orchestrator | 2025-04-01 19:50:10.154879 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-04-01 19:50:10.154884 | orchestrator | Tuesday 01 April 2025 19:49:20 +0000 (0:00:00.598) 0:13:07.970 ********* 2025-04-01 19:50:10.154889 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.154894 | orchestrator | 2025-04-01 19:50:10.154899 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-04-01 19:50:10.154904 | orchestrator | Tuesday 01 April 2025 19:49:21 +0000 (0:00:00.873) 0:13:08.843 ********* 2025-04-01 19:50:10.154909 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-04-01 19:50:10.154914 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-04-01 19:50:10.154919 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-04-01 19:50:10.154923 | orchestrator | 2025-04-01 19:50:10.154928 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-04-01 19:50:10.154933 | orchestrator | Tuesday 01 April 2025 19:49:21 +0000 (0:00:00.757) 0:13:09.601 ********* 2025-04-01 19:50:10.154938 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:50:10.154943 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.154948 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-01 19:50:10.154953 | orchestrator | 2025-04-01 19:50:10.154957 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-04-01 19:50:10.154962 | orchestrator | Tuesday 01 April 2025 19:49:24 +0000 (0:00:02.021) 0:13:11.622 ********* 2025-04-01 19:50:10.154967 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 19:50:10.154975 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-01 19:50:10.154980 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.154985 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 19:50:10.154990 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-01 19:50:10.154995 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155000 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 19:50:10.155004 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-01 19:50:10.155009 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155014 | orchestrator | 2025-04-01 19:50:10.155019 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-04-01 19:50:10.155024 | orchestrator | Tuesday 01 April 2025 19:49:25 +0000 (0:00:01.153) 0:13:12.775 ********* 2025-04-01 19:50:10.155029 | 
orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155034 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.155039 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.155044 | orchestrator | 2025-04-01 19:50:10.155049 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-04-01 19:50:10.155054 | orchestrator | Tuesday 01 April 2025 19:49:25 +0000 (0:00:00.701) 0:13:13.477 ********* 2025-04-01 19:50:10.155059 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155064 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.155069 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.155074 | orchestrator | 2025-04-01 19:50:10.155078 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-04-01 19:50:10.155084 | orchestrator | Tuesday 01 April 2025 19:49:26 +0000 (0:00:00.419) 0:13:13.897 ********* 2025-04-01 19:50:10.155088 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-04-01 19:50:10.155093 | orchestrator | 2025-04-01 19:50:10.155098 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-04-01 19:50:10.155103 | orchestrator | Tuesday 01 April 2025 19:49:26 +0000 (0:00:00.243) 0:13:14.140 ********* 2025-04-01 19:50:10.155108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155136 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155141 | orchestrator | 2025-04-01 19:50:10.155146 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-04-01 19:50:10.155151 | orchestrator | Tuesday 01 April 2025 19:49:27 +0000 (0:00:00.684) 0:13:14.824 ********* 2025-04-01 19:50:10.155156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155188 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155193 | 
orchestrator | 2025-04-01 19:50:10.155198 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-04-01 19:50:10.155203 | orchestrator | Tuesday 01 April 2025 19:49:28 +0000 (0:00:01.282) 0:13:16.107 ********* 2025-04-01 19:50:10.155208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-01 19:50:10.155233 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155238 | orchestrator | 2025-04-01 19:50:10.155243 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-04-01 19:50:10.155248 | orchestrator | Tuesday 01 April 2025 19:49:29 +0000 (0:00:00.732) 0:13:16.840 ********* 2025-04-01 19:50:10.155252 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-01 19:50:10.155258 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-01 19:50:10.155263 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-01 19:50:10.155268 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-01 19:50:10.155273 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-01 19:50:10.155277 | orchestrator | 2025-04-01 19:50:10.155282 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-04-01 19:50:10.155287 | orchestrator | Tuesday 01 April 2025 19:49:51 +0000 (0:00:22.741) 0:13:39.581 ********* 2025-04-01 19:50:10.155292 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155297 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.155302 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.155306 | orchestrator | 2025-04-01 19:50:10.155311 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-04-01 19:50:10.155316 | orchestrator | Tuesday 01 April 2025 19:49:52 +0000 (0:00:00.502) 0:13:40.083 ********* 2025-04-01 19:50:10.155321 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155326 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.155331 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.155336 | orchestrator | 2025-04-01 19:50:10.155341 | 
orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-04-01 19:50:10.155345 | orchestrator | Tuesday 01 April 2025 19:49:52 +0000 (0:00:00.349) 0:13:40.433 ********* 2025-04-01 19:50:10.155350 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.155355 | orchestrator | 2025-04-01 19:50:10.155360 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-04-01 19:50:10.155365 | orchestrator | Tuesday 01 April 2025 19:49:53 +0000 (0:00:00.561) 0:13:40.995 ********* 2025-04-01 19:50:10.155370 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.155378 | orchestrator | 2025-04-01 19:50:10.155386 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-04-01 19:50:10.155391 | orchestrator | Tuesday 01 April 2025 19:49:54 +0000 (0:00:00.834) 0:13:41.829 ********* 2025-04-01 19:50:10.155395 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155400 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155405 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155410 | orchestrator | 2025-04-01 19:50:10.155415 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-04-01 19:50:10.155420 | orchestrator | Tuesday 01 April 2025 19:49:55 +0000 (0:00:01.118) 0:13:42.948 ********* 2025-04-01 19:50:10.155425 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155430 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155434 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155439 | orchestrator | 2025-04-01 19:50:10.155444 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-04-01 19:50:10.155449 | orchestrator | Tuesday 01 April 2025 19:49:56 +0000 (0:00:01.049) 0:13:43.997 ********* 2025-04-01 19:50:10.155454 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155459 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155464 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155468 | orchestrator | 2025-04-01 19:50:10.155475 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-04-01 19:50:10.155480 | orchestrator | Tuesday 01 April 2025 19:49:58 +0000 (0:00:01.965) 0:13:45.962 ********* 2025-04-01 19:50:10.155485 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.155490 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.155495 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-01 19:50:10.155500 | orchestrator | 2025-04-01 19:50:10.155505 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-04-01 19:50:10.155510 | orchestrator | Tuesday 01 April 2025 19:50:00 +0000 (0:00:01.811) 0:13:47.774 ********* 2025-04-01 19:50:10.155515 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155520 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:50:10.155525 | 
orchestrator | skipping: [testbed-node-5] 2025-04-01 19:50:10.155530 | orchestrator | 2025-04-01 19:50:10.155534 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-01 19:50:10.155539 | orchestrator | Tuesday 01 April 2025 19:50:01 +0000 (0:00:01.453) 0:13:49.227 ********* 2025-04-01 19:50:10.155544 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155549 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155554 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155559 | orchestrator | 2025-04-01 19:50:10.155563 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-01 19:50:10.155568 | orchestrator | Tuesday 01 April 2025 19:50:02 +0000 (0:00:00.667) 0:13:49.895 ********* 2025-04-01 19:50:10.155573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:50:10.155578 | orchestrator | 2025-04-01 19:50:10.155583 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-01 19:50:10.155588 | orchestrator | Tuesday 01 April 2025 19:50:03 +0000 (0:00:00.901) 0:13:50.797 ********* 2025-04-01 19:50:10.155593 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.155598 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.155603 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.155608 | orchestrator | 2025-04-01 19:50:10.155612 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-01 19:50:10.155694 | orchestrator | Tuesday 01 April 2025 19:50:03 +0000 (0:00:00.408) 0:13:51.205 ********* 2025-04-01 19:50:10.155700 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155705 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155710 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:50:10.155715 | orchestrator | 2025-04-01 19:50:10.155720 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-01 19:50:10.155725 | orchestrator | Tuesday 01 April 2025 19:50:04 +0000 (0:00:01.201) 0:13:52.407 ********* 2025-04-01 19:50:10.155730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:50:10.155735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:50:10.155740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:50:10.155744 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:50:10.155749 | orchestrator | 2025-04-01 19:50:10.155754 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-01 19:50:10.155759 | orchestrator | Tuesday 01 April 2025 19:50:06 +0000 (0:00:01.356) 0:13:53.763 ********* 2025-04-01 19:50:10.155764 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:50:10.155769 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:50:10.155774 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:50:10.155779 | orchestrator | 2025-04-01 19:50:10.155784 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-01 19:50:10.155789 | orchestrator | Tuesday 01 April 2025 19:50:06 +0000 (0:00:00.353) 0:13:54.116 ********* 2025-04-01 19:50:10.155793 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:50:10.155798 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:50:10.155803 | orchestrator 
| changed: [testbed-node-5] 2025-04-01 19:50:10.155808 | orchestrator | 2025-04-01 19:50:10.155813 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:50:10.155818 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-04-01 19:50:10.155823 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-04-01 19:50:10.155828 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-04-01 19:50:10.155833 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-04-01 19:50:10.155838 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-04-01 19:50:10.155843 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-04-01 19:50:10.155848 | orchestrator | 2025-04-01 19:50:10.155853 | orchestrator | 2025-04-01 19:50:10.155858 | orchestrator | 2025-04-01 19:50:10.155863 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:50:10.155868 | orchestrator | Tuesday 01 April 2025 19:50:07 +0000 (0:00:01.044) 0:13:55.161 ********* 2025-04-01 19:50:10.155875 | orchestrator | =============================================================================== 2025-04-01 19:50:13.143412 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 30.03s 2025-04-01 19:50:13.143558 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 29.41s 2025-04-01 19:50:13.143576 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 22.74s 2025-04-01 19:50:13.143590 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.50s 2025-04-01 19:50:13.143671 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.08s 2025-04-01 19:50:13.143687 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.24s 2025-04-01 19:50:13.143728 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.63s 2025-04-01 19:50:13.143742 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 9.00s 2025-04-01 19:50:13.143754 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 6.55s 2025-04-01 19:50:13.143767 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 6.52s 2025-04-01 19:50:13.143779 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.34s 2025-04-01 19:50:13.143791 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.22s 2025-04-01 19:50:13.143804 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.98s 2025-04-01 19:50:13.143816 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.83s 2025-04-01 19:50:13.143829 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.95s 2025-04-01 19:50:13.143841 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 4.55s 2025-04-01 19:50:13.143853 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.03s 2025-04-01 19:50:13.143866 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.89s 2025-04-01 19:50:13.143878 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 3.78s 2025-04-01 19:50:13.143891 | orchestrator | ceph-facts : find a running mon container ------------------------------- 3.57s 2025-04-01 19:50:13.143903 | orchestrator | 2025-04-01 19:50:10 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:13.143917 | orchestrator | 2025-04-01 19:50:10 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:13.143931 | orchestrator | 2025-04-01 19:50:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:13.143965 | orchestrator | 2025-04-01 19:50:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:13.144355 | orchestrator | 2025-04-01 19:50:13 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:13.146101 | orchestrator | 2025-04-01 19:50:13 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:16.200985 | orchestrator | 2025-04-01 19:50:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:16.201145 | orchestrator | 2025-04-01 19:50:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:16.201604 | orchestrator | 2025-04-01 19:50:16 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:16.205493 | orchestrator | 2025-04-01 19:50:16 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:19.250753 | orchestrator | 2025-04-01 19:50:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:19.250901 | orchestrator | 2025-04-01 19:50:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in 
state STARTED 2025-04-01 19:50:19.253259 | orchestrator | 2025-04-01 19:50:19 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:19.255945 | orchestrator | 2025-04-01 19:50:19 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:22.302178 | orchestrator | 2025-04-01 19:50:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:22.302310 | orchestrator | 2025-04-01 19:50:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:22.308210 | orchestrator | 2025-04-01 19:50:22 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:22.308842 | orchestrator | 2025-04-01 19:50:22 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:25.358273 | orchestrator | 2025-04-01 19:50:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:25.358428 | orchestrator | 2025-04-01 19:50:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:25.359534 | orchestrator | 2025-04-01 19:50:25 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:25.361214 | orchestrator | 2025-04-01 19:50:25 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:25.362388 | orchestrator | 2025-04-01 19:50:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:28.414342 | orchestrator | 2025-04-01 19:50:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:28.416362 | orchestrator | 2025-04-01 19:50:28 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:28.421038 | orchestrator | 2025-04-01 19:50:28 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:31.470191 | orchestrator | 2025-04-01 19:50:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:31.470285 | orchestrator | 2025-04-01 19:50:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:31.470376 | orchestrator | 2025-04-01 19:50:31 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:31.471796 | orchestrator | 2025-04-01 19:50:31 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:34.521360 | orchestrator | 2025-04-01 19:50:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:34.521532 | orchestrator | 2025-04-01 19:50:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:34.524942 | orchestrator | 2025-04-01 19:50:34 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:34.526432 | orchestrator | 2025-04-01 19:50:34 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:37.574125 | orchestrator | 2025-04-01 19:50:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:37.574297 | orchestrator | 2025-04-01 19:50:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:40.629359 | orchestrator | 2025-04-01 19:50:37 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:40.629493 | orchestrator | 2025-04-01 19:50:37 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:40.629512 | orchestrator | 2025-04-01 19:50:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:40.629546 | 
orchestrator | 2025-04-01 19:50:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:40.630540 | orchestrator | 2025-04-01 19:50:40 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:40.634255 | orchestrator | 2025-04-01 19:50:40 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:43.679014 | orchestrator | 2025-04-01 19:50:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:43.679845 | orchestrator | 2025-04-01 19:50:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:43.681105 | orchestrator | 2025-04-01 19:50:43 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:43.681762 | orchestrator | 2025-04-01 19:50:43 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:46.735699 | orchestrator | 2025-04-01 19:50:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:46.735892 | orchestrator | 2025-04-01 19:50:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:46.736669 | orchestrator | 2025-04-01 19:50:46 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:46.739234 | orchestrator | 2025-04-01 19:50:46 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:49.785711 | orchestrator | 2025-04-01 19:50:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:49.785994 | orchestrator | 2025-04-01 19:50:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:49.787184 | orchestrator | 2025-04-01 19:50:49 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:49.787221 | orchestrator | 2025-04-01 19:50:49 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:52.835072 | orchestrator | 2025-04-01 19:50:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:52.835211 | orchestrator | 2025-04-01 19:50:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:52.835811 | orchestrator | 2025-04-01 19:50:52 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state STARTED 2025-04-01 19:50:52.837101 | orchestrator | 2025-04-01 19:50:52 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:55.887266 | orchestrator | 2025-04-01 19:50:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:55.887408 | orchestrator | 2025-04-01 19:50:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:55.889044 | orchestrator | 2025-04-01 19:50:55 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:50:55.893196 | orchestrator | 2025-04-01 19:50:55.895341 | orchestrator | 2025-04-01 19:50:55 | INFO  | Task 5bbef1e0-7d28-40a6-87e4-de2cc47eb989 is in state SUCCESS 2025-04-01 19:50:55.895412 | orchestrator | 2025-04-01 19:50:55.895429 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-04-01 19:50:55.895445 | orchestrator | 2025-04-01 19:50:55.895460 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-01 19:50:55.895474 | orchestrator | Tuesday 01 April 2025 19:47:19 +0000 (0:00:00.182) 0:00:00.182 ********* 2025-04-01 19:50:55.895489 | orchestrator | ok: [localhost] => { 2025-04-01 
19:50:55.895504 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-04-01 19:50:55.895519 | orchestrator | } 2025-04-01 19:50:55.895533 | orchestrator | 2025-04-01 19:50:55.895548 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-04-01 19:50:55.895562 | orchestrator | Tuesday 01 April 2025 19:47:19 +0000 (0:00:00.048) 0:00:00.230 ********* 2025-04-01 19:50:55.895576 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-01 19:50:55.895592 | orchestrator | ...ignoring 2025-04-01 19:50:55.895606 | orchestrator | 2025-04-01 19:50:55.895620 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-01 19:50:55.895674 | orchestrator | Tuesday 01 April 2025 19:47:22 +0000 (0:00:02.666) 0:00:02.897 ********* 2025-04-01 19:50:55.895690 | orchestrator | skipping: [localhost] 2025-04-01 19:50:55.895704 | orchestrator | 2025-04-01 19:50:55.895719 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-01 19:50:55.895733 | orchestrator | Tuesday 01 April 2025 19:47:22 +0000 (0:00:00.074) 0:00:02.971 ********* 2025-04-01 19:50:55.895747 | orchestrator | ok: [localhost] 2025-04-01 19:50:55.895761 | orchestrator | 2025-04-01 19:50:55.895801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:50:55.895815 | orchestrator | 2025-04-01 19:50:55.895830 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:50:55.895844 | orchestrator | Tuesday 01 April 2025 19:47:22 +0000 (0:00:00.305) 0:00:03.277 ********* 2025-04-01 19:50:55.895859 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.895873 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.895887 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.895901 | orchestrator | 2025-04-01 19:50:55.895915 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:50:55.895929 | orchestrator | Tuesday 01 April 2025 19:47:22 +0000 (0:00:00.455) 0:00:03.733 ********* 2025-04-01 19:50:55.895943 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-01 19:50:55.895973 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-01 19:50:55.895987 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-01 19:50:55.896002 | orchestrator | 2025-04-01 19:50:55.896016 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-01 19:50:55.896030 | orchestrator | 2025-04-01 19:50:55.896044 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-01 19:50:55.896065 | orchestrator | Tuesday 01 April 2025 19:47:23 +0000 (0:00:00.498) 0:00:04.231 ********* 2025-04-01 19:50:55.896080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:50:55.896094 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-01 19:50:55.896108 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-01 19:50:55.896122 | orchestrator | 2025-04-01 19:50:55.896136 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-01 
19:50:55.896150 | orchestrator | Tuesday 01 April 2025 19:47:24 +0000 (0:00:00.753) 0:00:04.984 ********* 2025-04-01 19:50:55.896164 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:55.896179 | orchestrator | 2025-04-01 19:50:55.896193 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-01 19:50:55.896207 | orchestrator | Tuesday 01 April 2025 19:47:25 +0000 (0:00:00.900) 0:00:05.885 ********* 2025-04-01 19:50:55.896238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896364 | orchestrator | 2025-04-01 19:50:55.896379 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-01 19:50:55.896393 | orchestrator | Tuesday 01 April 2025 19:47:29 +0000 (0:00:04.505) 0:00:10.390 ********* 2025-04-01 19:50:55.896407 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.896421 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.896435 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.896449 | orchestrator | 2025-04-01 19:50:55.896463 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-01 19:50:55.896477 | orchestrator | Tuesday 01 April 2025 19:47:30 +0000 (0:00:00.906) 0:00:11.297 ********* 2025-04-01 19:50:55.896491 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.896512 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.896526 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.896540 | orchestrator | 2025-04-01 19:50:55.896554 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-01 19:50:55.896720 | orchestrator | Tuesday 01 April 2025 19:47:32 +0000 (0:00:01.973) 0:00:13.271 ********* 2025-04-01 19:50:55.896748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.896822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.896868 | orchestrator | 2025-04-01 19:50:55.896882 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-01 19:50:55.896896 | orchestrator | Tuesday 01 April 2025 19:47:39 +0000 (0:00:06.649) 0:00:19.920 ********* 2025-04-01 19:50:55.896910 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.896924 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.896939 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.896952 | orchestrator | 2025-04-01 19:50:55.896967 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 
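[Editor's note] The config.json items dumped above are rendered from kolla-ansible's per-service dictionary for the mariadb role; the interesting part is the haproxy sub-dictionary, which is what turns the three-node Galera cluster into a single-writer database behind the internal VIP. Below is a trimmed, illustrative reconstruction of that structure built only from values visible in this log; the real kolla-ansible variable file is larger and its key names may differ.

# Illustrative reconstruction from the log output above, not the upstream
# kolla-ansible defaults file.
mariadb_services:
  mariadb:
    container_name: mariadb
    group: mariadb_shard_0
    image: registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206
    volumes:
      - "/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro"
      - "mariadb:/var/lib/mysql"
      - "kolla_logs:/var/log/kolla/"
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/clustercheck"]   # Galera-aware health check
      interval: "30"
      retries: "3"
      start_period: "5"
      timeout: "30"
    haproxy:
      mariadb:
        enabled: true
        mode: tcp
        port: "3306"
        listen_port: "3306"
        backend_tcp_extra:
          - option srvtcpka
          - timeout server 3600s
          - option httpchk            # HTTP check against clustercheck
        # One active member, two "backup" members: HAProxy sends all traffic to
        # a single Galera node (health-checked via clustercheck on port 4569)
        # and only fails over when that node becomes unhealthy.
        custom_member_list:
          - " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5"
          - " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup"
          - " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup"

Because testbed-node-1 and testbed-node-2 are listed as backup members, all traffic goes to a single Galera node as long as its clustercheck endpoint reports healthy; the usual reason for this layout is to avoid write conflicts between Galera nodes.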
2025-04-01 19:50:55.896980 | orchestrator | Tuesday 01 April 2025 19:47:40 +0000 (0:00:01.263) 0:00:21.184 ********* 2025-04-01 19:50:55.896995 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.897009 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:55.897023 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:55.897037 | orchestrator | 2025-04-01 19:50:55.897051 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-01 19:50:55.897065 | orchestrator | Tuesday 01 April 2025 19:47:52 +0000 (0:00:12.155) 0:00:33.339 ********* 2025-04-01 19:50:55.897088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.897111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.897127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.897149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-01 19:50:55.897173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.897189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-01 19:50:55.897203 | orchestrator | 2025-04-01 19:50:55.897218 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-01 19:50:55.897232 | orchestrator | Tuesday 01 April 2025 19:47:57 +0000 (0:00:05.532) 0:00:38.872 ********* 2025-04-01 19:50:55.897246 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.897261 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:55.897275 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:55.897289 | orchestrator | 2025-04-01 19:50:55.897303 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-01 19:50:55.897317 | orchestrator | Tuesday 01 April 2025 19:47:59 +0000 (0:00:01.064) 0:00:39.937 ********* 2025-04-01 19:50:55.897333 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.897348 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.897364 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.897379 | orchestrator | 2025-04-01 19:50:55.897395 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-04-01 19:50:55.897410 | orchestrator | Tuesday 01 April 2025 19:47:59 +0000 (0:00:00.519) 0:00:40.456 ********* 2025-04-01 19:50:55.897425 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.897441 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.897456 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.897478 | orchestrator | 2025-04-01 19:50:55.897494 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-04-01 19:50:55.897509 | orchestrator | Tuesday 01 April 2025 19:48:00 +0000 (0:00:00.496) 0:00:40.953 ********* 2025-04-01 19:50:55.897526 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-04-01 19:50:55.897542 | orchestrator | ...ignoring 2025-04-01 19:50:55.897557 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-04-01 19:50:55.897573 | orchestrator | ...ignoring 2025-04-01 19:50:55.897589 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-04-01 19:50:55.897604 | orchestrator | ...ignoring 2025-04-01 19:50:55.897620 | orchestrator | 2025-04-01 19:50:55.897656 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-04-01 19:50:55.897673 | orchestrator | Tuesday 01 April 2025 19:48:11 +0000 (0:00:10.933) 0:00:51.887 ********* 2025-04-01 19:50:55.897688 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.897702 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.897716 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.897730 | orchestrator | 2025-04-01 19:50:55.897744 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-04-01 19:50:55.897758 | orchestrator | Tuesday 01 April 2025 19:48:11 +0000 (0:00:00.894) 0:00:52.782 ********* 2025-04-01 19:50:55.897772 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.897786 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.897801 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.897815 | orchestrator | 2025-04-01 19:50:55.897829 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-04-01 19:50:55.897843 | orchestrator | Tuesday 01 April 2025 19:48:12 +0000 (0:00:00.564) 0:00:53.346 ********* 2025-04-01 19:50:55.897857 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.897871 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.897885 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.897899 | orchestrator | 2025-04-01 19:50:55.897926 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-04-01 19:50:55.897941 | orchestrator | Tuesday 01 April 2025 19:48:12 +0000 (0:00:00.435) 0:00:53.782 ********* 2025-04-01 19:50:55.897969 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.897983 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.897998 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.898012 | orchestrator | 2025-04-01 19:50:55.898250 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-04-01 19:50:55.898270 | orchestrator | Tuesday 01 April 2025 19:48:13 +0000 (0:00:00.652) 0:00:54.434 ********* 2025-04-01 19:50:55.898285 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.898299 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.898313 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.898327 | orchestrator | 2025-04-01 19:50:55.898341 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-04-01 19:50:55.898356 | orchestrator | Tuesday 01 April 2025 19:48:14 +0000 (0:00:00.655) 0:00:55.091 ********* 2025-04-01 19:50:55.898370 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.898383 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.898397 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.898411 | orchestrator | 2025-04-01 19:50:55.898425 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-01 19:50:55.898439 | orchestrator | Tuesday 01 April 2025 19:48:15 +0000 (0:00:01.013) 0:00:56.104 ********* 2025-04-01 19:50:55.898453 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.898467 | orchestrator | skipping: 
[testbed-node-2] 2025-04-01 19:50:55.898481 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-04-01 19:50:55.898505 | orchestrator | 2025-04-01 19:50:55.898519 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-04-01 19:50:55.898533 | orchestrator | Tuesday 01 April 2025 19:48:16 +0000 (0:00:00.816) 0:00:56.921 ********* 2025-04-01 19:50:55.898546 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.898560 | orchestrator | 2025-04-01 19:50:55.898574 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-04-01 19:50:55.898588 | orchestrator | Tuesday 01 April 2025 19:48:26 +0000 (0:00:10.707) 0:01:07.629 ********* 2025-04-01 19:50:55.898602 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.898616 | orchestrator | 2025-04-01 19:50:55.898630 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-01 19:50:55.898965 | orchestrator | Tuesday 01 April 2025 19:48:26 +0000 (0:00:00.137) 0:01:07.767 ********* 2025-04-01 19:50:55.899009 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.899025 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.899038 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.899051 | orchestrator | 2025-04-01 19:50:55.899064 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-04-01 19:50:55.899077 | orchestrator | Tuesday 01 April 2025 19:48:28 +0000 (0:00:01.138) 0:01:08.905 ********* 2025-04-01 19:50:55.899090 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.899103 | orchestrator | 2025-04-01 19:50:55.899115 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-04-01 19:50:55.899129 | orchestrator | Tuesday 01 April 2025 19:48:37 +0000 (0:00:09.621) 0:01:18.526 ********* 2025-04-01 19:50:55.899141 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.899155 | orchestrator | 2025-04-01 19:50:55.899168 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-04-01 19:50:55.899180 | orchestrator | Tuesday 01 April 2025 19:48:39 +0000 (0:00:01.533) 0:01:20.060 ********* 2025-04-01 19:50:55.899193 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.899206 | orchestrator | 2025-04-01 19:50:55.899218 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-04-01 19:50:55.899231 | orchestrator | Tuesday 01 April 2025 19:48:41 +0000 (0:00:02.800) 0:01:22.860 ********* 2025-04-01 19:50:55.899244 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.899256 | orchestrator | 2025-04-01 19:50:55.899269 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-04-01 19:50:55.899281 | orchestrator | Tuesday 01 April 2025 19:48:42 +0000 (0:00:00.120) 0:01:22.981 ********* 2025-04-01 19:50:55.899294 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.899306 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.899318 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.899331 | orchestrator | 2025-04-01 19:50:55.899343 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-04-01 19:50:55.899355 | orchestrator | Tuesday 01 April 2025 19:48:42 +0000 (0:00:00.496) 0:01:23.477 ********* 
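[Editor's note] The sequence above is the standard Galera bootstrap: testbed-node-0 is started alone in a way that lets it form a new primary component by itself (typically mysqld with --wsrep-new-cluster), and only after it reports a live port and a synced WSREP state are testbed-node-1 and testbed-node-2 started and allowed to state-transfer from it. A minimal sketch of the kind of check behind the "Wait for ... to sync WSREP" handlers follows; the task layout and the mariadb_monitor_password variable are assumptions for illustration, not kolla-ansible's actual implementation.

# Minimal sketch: poll the Galera state until the node reports "Synced"
# before continuing with the next cluster member.
- name: Wait for MariaDB service to sync WSREP (sketch)
  ansible.builtin.command: >
    docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
    --silent --skip-column-names
    -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_state
  until: "'Synced' in wsrep_state.stdout"
  retries: 30
  delay: 10
  changed_when: false
  no_log: true   # the command line contains a password

The monitor account used here is the one visible in the container environment above (MYSQL_USERNAME: monitor); SHOW STATUS needs no extra privileges, and a node is only safe to receive traffic once wsrep_local_state_comment reports Synced.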
2025-04-01 19:50:55.899368 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.899380 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:55.899420 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:55.899433 | orchestrator | 2025-04-01 19:50:55.899445 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-04-01 19:50:55.899457 | orchestrator | Tuesday 01 April 2025 19:48:43 +0000 (0:00:00.559) 0:01:24.037 ********* 2025-04-01 19:50:55.899470 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-01 19:50:55.899483 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:55.899495 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.899507 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:55.899520 | orchestrator | 2025-04-01 19:50:55.899532 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-01 19:50:55.899544 | orchestrator | skipping: no hosts matched 2025-04-01 19:50:55.899593 | orchestrator | 2025-04-01 19:50:55.899606 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-01 19:50:55.899618 | orchestrator | 2025-04-01 19:50:55.899667 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-01 19:50:55.899682 | orchestrator | Tuesday 01 April 2025 19:48:58 +0000 (0:00:15.706) 0:01:39.744 ********* 2025-04-01 19:50:55.899695 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:50:55.899708 | orchestrator | 2025-04-01 19:50:55.899720 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-01 19:50:55.899732 | orchestrator | Tuesday 01 April 2025 19:49:15 +0000 (0:00:16.185) 0:01:55.930 ********* 2025-04-01 19:50:55.899775 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.899789 | orchestrator | 2025-04-01 19:50:55.899802 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-01 19:50:55.899814 | orchestrator | Tuesday 01 April 2025 19:49:34 +0000 (0:00:19.569) 0:02:15.500 ********* 2025-04-01 19:50:55.899827 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.899839 | orchestrator | 2025-04-01 19:50:55.899852 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-01 19:50:55.899864 | orchestrator | 2025-04-01 19:50:55.899877 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-01 19:50:55.899889 | orchestrator | Tuesday 01 April 2025 19:49:37 +0000 (0:00:02.829) 0:02:18.329 ********* 2025-04-01 19:50:55.899901 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:50:55.899914 | orchestrator | 2025-04-01 19:50:55.899926 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-01 19:50:55.899939 | orchestrator | Tuesday 01 April 2025 19:49:53 +0000 (0:00:16.355) 0:02:34.684 ********* 2025-04-01 19:50:55.899951 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.899964 | orchestrator | 2025-04-01 19:50:55.899976 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-01 19:50:55.899988 | orchestrator | Tuesday 01 April 2025 19:50:13 +0000 (0:00:19.546) 0:02:54.230 ********* 2025-04-01 19:50:55.900001 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.900013 | 
orchestrator | 2025-04-01 19:50:55.900026 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-01 19:50:55.900038 | orchestrator | 2025-04-01 19:50:55.900050 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-01 19:50:55.900062 | orchestrator | Tuesday 01 April 2025 19:50:16 +0000 (0:00:02.746) 0:02:56.977 ********* 2025-04-01 19:50:55.900075 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.900087 | orchestrator | 2025-04-01 19:50:55.900099 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-01 19:50:55.900112 | orchestrator | Tuesday 01 April 2025 19:50:30 +0000 (0:00:14.311) 0:03:11.288 ********* 2025-04-01 19:50:55.900124 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.900137 | orchestrator | 2025-04-01 19:50:55.900149 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-01 19:50:55.900162 | orchestrator | Tuesday 01 April 2025 19:50:34 +0000 (0:00:03.692) 0:03:14.981 ********* 2025-04-01 19:50:55.900174 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.900187 | orchestrator | 2025-04-01 19:50:55.900199 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-01 19:50:55.900212 | orchestrator | 2025-04-01 19:50:55.900224 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-01 19:50:55.900237 | orchestrator | Tuesday 01 April 2025 19:50:36 +0000 (0:00:02.799) 0:03:17.780 ********* 2025-04-01 19:50:55.900249 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:50:55.900262 | orchestrator | 2025-04-01 19:50:55.900274 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-04-01 19:50:55.900286 | orchestrator | Tuesday 01 April 2025 19:50:37 +0000 (0:00:00.844) 0:03:18.625 ********* 2025-04-01 19:50:55.900299 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.900311 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.900332 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.900344 | orchestrator | 2025-04-01 19:50:55.900357 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-04-01 19:50:55.900369 | orchestrator | Tuesday 01 April 2025 19:50:40 +0000 (0:00:02.871) 0:03:21.497 ********* 2025-04-01 19:50:55.900381 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.900394 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.900407 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.900419 | orchestrator | 2025-04-01 19:50:55.900431 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-04-01 19:50:55.900444 | orchestrator | Tuesday 01 April 2025 19:50:43 +0000 (0:00:02.745) 0:03:24.242 ********* 2025-04-01 19:50:55.900456 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.900469 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.900481 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.900494 | orchestrator | 2025-04-01 19:50:55.900506 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-01 19:50:55.900519 | orchestrator | Tuesday 01 April 2025 19:50:45 +0000 (0:00:02.390) 0:03:26.633 ********* 
2025-04-01 19:50:55.900531 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.900543 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.900556 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:50:55.900568 | orchestrator | 2025-04-01 19:50:55.900586 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-01 19:50:55.900599 | orchestrator | Tuesday 01 April 2025 19:50:47 +0000 (0:00:02.077) 0:03:28.710 ********* 2025-04-01 19:50:55.900611 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:50:55.900623 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:50:55.900658 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:50:55.900672 | orchestrator | 2025-04-01 19:50:55.900684 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-01 19:50:55.900697 | orchestrator | Tuesday 01 April 2025 19:50:51 +0000 (0:00:04.161) 0:03:32.872 ********* 2025-04-01 19:50:55.900709 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:50:55.900722 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:50:55.900734 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:50:55.900746 | orchestrator | 2025-04-01 19:50:55.900759 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:50:55.900772 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-01 19:50:55.900786 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-01 19:50:55.900807 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-01 19:50:58.946442 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-01 19:50:58.946552 | orchestrator | 2025-04-01 19:50:58.946570 | orchestrator | 2025-04-01 19:50:58.946586 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:50:58.946602 | orchestrator | Tuesday 01 April 2025 19:50:52 +0000 (0:00:00.492) 0:03:33.364 ********* 2025-04-01 19:50:58.946616 | orchestrator | =============================================================================== 2025-04-01 19:50:58.946631 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 39.12s 2025-04-01 19:50:58.946695 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.54s 2025-04-01 19:50:58.946710 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 15.71s 2025-04-01 19:50:58.946724 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.31s 2025-04-01 19:50:58.946739 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 12.16s 2025-04-01 19:50:58.946780 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2025-04-01 19:50:58.946795 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.71s 2025-04-01 19:50:58.946809 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.62s 2025-04-01 19:50:58.946823 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.65s 2025-04-01 19:50:58.946837 | orchestrator | mariadb : Wait for MariaDB service to sync 
WSREP ------------------------ 5.58s 2025-04-01 19:50:58.946851 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.53s 2025-04-01 19:50:58.946865 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.50s 2025-04-01 19:50:58.946879 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.16s 2025-04-01 19:50:58.946893 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 3.69s 2025-04-01 19:50:58.946908 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.87s 2025-04-01 19:50:58.946922 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.80s 2025-04-01 19:50:58.946936 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s 2025-04-01 19:50:58.946950 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.75s 2025-04-01 19:50:58.946964 | orchestrator | Check MariaDB service --------------------------------------------------- 2.67s 2025-04-01 19:50:58.946980 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.39s 2025-04-01 19:50:58.946996 | orchestrator | 2025-04-01 19:50:55 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:50:58.947012 | orchestrator | 2025-04-01 19:50:55 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:58.947027 | orchestrator | 2025-04-01 19:50:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:50:58.947059 | orchestrator | 2025-04-01 19:50:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:50:58.947729 | orchestrator | 2025-04-01 19:50:58 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:50:58.950376 | orchestrator | 2025-04-01 19:50:58 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:50:58.951953 | orchestrator | 2025-04-01 19:50:58 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:50:58.952842 | orchestrator | 2025-04-01 19:50:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:02.023039 | orchestrator | 2025-04-01 19:51:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:02.026115 | orchestrator | 2025-04-01 19:51:02 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:02.027863 | orchestrator | 2025-04-01 19:51:02 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:02.029830 | orchestrator | 2025-04-01 19:51:02 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:05.077331 | orchestrator | 2025-04-01 19:51:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:05.077453 | orchestrator | 2025-04-01 19:51:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:05.080825 | orchestrator | 2025-04-01 19:51:05 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:05.083216 | orchestrator | 2025-04-01 19:51:05 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:05.085528 | orchestrator | 2025-04-01 19:51:05 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:05.085763 | 
orchestrator | 2025-04-01 19:51:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:08.131396 | orchestrator | 2025-04-01 19:51:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:08.135511 | orchestrator | 2025-04-01 19:51:08 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:08.138086 | orchestrator | 2025-04-01 19:51:08 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:08.140302 | orchestrator | 2025-04-01 19:51:08 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:08.140949 | orchestrator | 2025-04-01 19:51:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:11.188584 | orchestrator | 2025-04-01 19:51:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:11.189821 | orchestrator | 2025-04-01 19:51:11 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:11.191841 | orchestrator | 2025-04-01 19:51:11 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:11.193244 | orchestrator | 2025-04-01 19:51:11 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:14.247885 | orchestrator | 2025-04-01 19:51:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:14.248032 | orchestrator | 2025-04-01 19:51:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:14.249419 | orchestrator | 2025-04-01 19:51:14 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:14.252004 | orchestrator | 2025-04-01 19:51:14 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:14.255856 | orchestrator | 2025-04-01 19:51:14 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:14.256355 | orchestrator | 2025-04-01 19:51:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:17.302769 | orchestrator | 2025-04-01 19:51:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:17.303125 | orchestrator | 2025-04-01 19:51:17 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:17.303760 | orchestrator | 2025-04-01 19:51:17 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:17.305061 | orchestrator | 2025-04-01 19:51:17 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:20.344356 | orchestrator | 2025-04-01 19:51:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:20.344527 | orchestrator | 2025-04-01 19:51:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:20.346585 | orchestrator | 2025-04-01 19:51:20 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:20.347332 | orchestrator | 2025-04-01 19:51:20 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:20.348615 | orchestrator | 2025-04-01 19:51:20 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:20.348755 | orchestrator | 2025-04-01 19:51:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:23.395595 | orchestrator | 2025-04-01 19:51:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:23.396999 | orchestrator | 2025-04-01 
19:51:23 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:23.398904 | orchestrator | 2025-04-01 19:51:23 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:23.400385 | orchestrator | 2025-04-01 19:51:23 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:26.451950 | orchestrator | 2025-04-01 19:51:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:26.452107 | orchestrator | 2025-04-01 19:51:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:26.454360 | orchestrator | 2025-04-01 19:51:26 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:26.454395 | orchestrator | 2025-04-01 19:51:26 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:26.462591 | orchestrator | 2025-04-01 19:51:26 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:29.517336 | orchestrator | 2025-04-01 19:51:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:29.517504 | orchestrator | 2025-04-01 19:51:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:29.519258 | orchestrator | 2025-04-01 19:51:29 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:29.520670 | orchestrator | 2025-04-01 19:51:29 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:29.521287 | orchestrator | 2025-04-01 19:51:29 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:29.521472 | orchestrator | 2025-04-01 19:51:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:32.563102 | orchestrator | 2025-04-01 19:51:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:32.564187 | orchestrator | 2025-04-01 19:51:32 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:32.564230 | orchestrator | 2025-04-01 19:51:32 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:32.564790 | orchestrator | 2025-04-01 19:51:32 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:35.615705 | orchestrator | 2025-04-01 19:51:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:35.615877 | orchestrator | 2025-04-01 19:51:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:35.616268 | orchestrator | 2025-04-01 19:51:35 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:35.616304 | orchestrator | 2025-04-01 19:51:35 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:35.617160 | orchestrator | 2025-04-01 19:51:35 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:38.655286 | orchestrator | 2025-04-01 19:51:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:38.655411 | orchestrator | 2025-04-01 19:51:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:38.656527 | orchestrator | 2025-04-01 19:51:38 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:38.657632 | orchestrator | 2025-04-01 19:51:38 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:38.658728 | orchestrator | 2025-04-01 
19:51:38 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:41.722225 | orchestrator | 2025-04-01 19:51:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:41.722374 | orchestrator | 2025-04-01 19:51:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:41.723560 | orchestrator | 2025-04-01 19:51:41 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:41.726154 | orchestrator | 2025-04-01 19:51:41 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:41.727319 | orchestrator | 2025-04-01 19:51:41 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:41.727357 | orchestrator | 2025-04-01 19:51:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:44.790006 | orchestrator | 2025-04-01 19:51:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:44.791181 | orchestrator | 2025-04-01 19:51:44 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:44.795552 | orchestrator | 2025-04-01 19:51:44 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:44.797568 | orchestrator | 2025-04-01 19:51:44 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:47.846929 | orchestrator | 2025-04-01 19:51:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:47.847068 | orchestrator | 2025-04-01 19:51:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:47.848960 | orchestrator | 2025-04-01 19:51:47 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:47.851636 | orchestrator | 2025-04-01 19:51:47 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:47.853329 | orchestrator | 2025-04-01 19:51:47 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:50.897498 | orchestrator | 2025-04-01 19:51:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:50.897630 | orchestrator | 2025-04-01 19:51:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:50.901150 | orchestrator | 2025-04-01 19:51:50 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:50.902890 | orchestrator | 2025-04-01 19:51:50 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:50.904888 | orchestrator | 2025-04-01 19:51:50 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:53.954981 | orchestrator | 2025-04-01 19:51:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:53.955102 | orchestrator | 2025-04-01 19:51:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:53.958367 | orchestrator | 2025-04-01 19:51:53 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:53.959962 | orchestrator | 2025-04-01 19:51:53 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:53.961579 | orchestrator | 2025-04-01 19:51:53 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:51:57.012435 | orchestrator | 2025-04-01 19:51:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:51:57.012580 | orchestrator | 2025-04-01 19:51:57 | INFO  | Task 
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:51:57.014808 | orchestrator | 2025-04-01 19:51:57 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:51:57.014845 | orchestrator | 2025-04-01 19:51:57 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:51:57.017884 | orchestrator | 2025-04-01 19:51:57 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:00.059072 | orchestrator | 2025-04-01 19:51:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:00.059210 | orchestrator | 2025-04-01 19:52:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:00.061954 | orchestrator | 2025-04-01 19:52:00 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:00.064337 | orchestrator | 2025-04-01 19:52:00 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:00.066620 | orchestrator | 2025-04-01 19:52:00 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:03.121159 | orchestrator | 2025-04-01 19:52:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:03.121293 | orchestrator | 2025-04-01 19:52:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:03.121866 | orchestrator | 2025-04-01 19:52:03 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:03.125561 | orchestrator | 2025-04-01 19:52:03 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:03.127700 | orchestrator | 2025-04-01 19:52:03 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:06.185910 | orchestrator | 2025-04-01 19:52:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:06.186076 | orchestrator | 2025-04-01 19:52:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:06.187236 | orchestrator | 2025-04-01 19:52:06 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:06.188762 | orchestrator | 2025-04-01 19:52:06 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:06.190155 | orchestrator | 2025-04-01 19:52:06 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:09.245100 | orchestrator | 2025-04-01 19:52:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:09.245222 | orchestrator | 2025-04-01 19:52:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:09.246108 | orchestrator | 2025-04-01 19:52:09 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:09.247530 | orchestrator | 2025-04-01 19:52:09 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:09.249741 | orchestrator | 2025-04-01 19:52:09 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:12.298867 | orchestrator | 2025-04-01 19:52:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:12.299011 | orchestrator | 2025-04-01 19:52:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:12.301978 | orchestrator | 2025-04-01 19:52:12 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:12.302971 | orchestrator | 2025-04-01 19:52:12 | INFO  | Task 
37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:12.304784 | orchestrator | 2025-04-01 19:52:12 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:15.352960 | orchestrator | 2025-04-01 19:52:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:15.353096 | orchestrator | 2025-04-01 19:52:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:15.354752 | orchestrator | 2025-04-01 19:52:15 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:15.357009 | orchestrator | 2025-04-01 19:52:15 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:15.358737 | orchestrator | 2025-04-01 19:52:15 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state STARTED 2025-04-01 19:52:18.419496 | orchestrator | 2025-04-01 19:52:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:18.419634 | orchestrator | 2025-04-01 19:52:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:18.420720 | orchestrator | 2025-04-01 19:52:18 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:18.420755 | orchestrator | 2025-04-01 19:52:18 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:18.420778 | orchestrator | 2025-04-01 19:52:18 | INFO  | Task 2636b953-25af-4b74-ba37-a3775c497917 is in state SUCCESS 2025-04-01 19:52:18.421812 | orchestrator | 2025-04-01 19:52:18.421846 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:52:18.421861 | orchestrator | 2025-04-01 19:52:18.421876 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-04-01 19:52:18.421890 | orchestrator | 2025-04-01 19:52:18.421965 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-01 19:52:18.421985 | orchestrator | Tuesday 01 April 2025 19:50:13 +0000 (0:00:01.309) 0:00:01.309 ********* 2025-04-01 19:52:18.422000 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:52:18.422062 | orchestrator | 2025-04-01 19:52:18.422437 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-01 19:52:18.422465 | orchestrator | Tuesday 01 April 2025 19:50:13 +0000 (0:00:00.548) 0:00:01.858 ********* 2025-04-01 19:52:18.422480 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-01 19:52:18.422495 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-01 19:52:18.422509 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-01 19:52:18.422523 | orchestrator | 2025-04-01 19:52:18.422538 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-01 19:52:18.422552 | orchestrator | Tuesday 01 April 2025 19:50:14 +0000 (0:00:00.912) 0:00:02.771 ********* 2025-04-01 19:52:18.422566 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:52:18.422581 | orchestrator | 2025-04-01 19:52:18.422595 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-01 19:52:18.422609 | orchestrator | Tuesday 01 April 2025 19:50:15 +0000 
(0:00:00.782) 0:00:03.553 ********* 2025-04-01 19:52:18.422623 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.422638 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.422675 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.422691 | orchestrator | 2025-04-01 19:52:18.422706 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-01 19:52:18.422720 | orchestrator | Tuesday 01 April 2025 19:50:16 +0000 (0:00:00.590) 0:00:04.144 ********* 2025-04-01 19:52:18.422734 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.422748 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.422762 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.422776 | orchestrator | 2025-04-01 19:52:18.422790 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-01 19:52:18.422804 | orchestrator | Tuesday 01 April 2025 19:50:16 +0000 (0:00:00.403) 0:00:04.548 ********* 2025-04-01 19:52:18.422817 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.422831 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.422845 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.422859 | orchestrator | 2025-04-01 19:52:18.422873 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-01 19:52:18.422887 | orchestrator | Tuesday 01 April 2025 19:50:17 +0000 (0:00:00.834) 0:00:05.383 ********* 2025-04-01 19:52:18.422926 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.422941 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.422955 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.422969 | orchestrator | 2025-04-01 19:52:18.422983 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-01 19:52:18.422997 | orchestrator | Tuesday 01 April 2025 19:50:17 +0000 (0:00:00.367) 0:00:05.751 ********* 2025-04-01 19:52:18.423011 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.423025 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.423039 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.423054 | orchestrator | 2025-04-01 19:52:18.423070 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-01 19:52:18.423086 | orchestrator | Tuesday 01 April 2025 19:50:18 +0000 (0:00:00.379) 0:00:06.131 ********* 2025-04-01 19:52:18.423101 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.423116 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.423132 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.423147 | orchestrator | 2025-04-01 19:52:18.423163 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-01 19:52:18.423178 | orchestrator | Tuesday 01 April 2025 19:50:18 +0000 (0:00:00.368) 0:00:06.500 ********* 2025-04-01 19:52:18.423194 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.423218 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.423233 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.423249 | orchestrator | 2025-04-01 19:52:18.423265 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-01 19:52:18.423280 | orchestrator | Tuesday 01 April 2025 19:50:19 +0000 (0:00:00.535) 0:00:07.035 ********* 2025-04-01 19:52:18.423296 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.423312 | orchestrator | ok: [testbed-node-4] 
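The ceph-facts tasks above probe whether a podman binary is present, fall back to docker otherwise, and derive the command prefix later used to execute ceph inside a monitor container. A minimal sketch of that decision, assuming the container name pattern visible in the "docker ps" filter further down in this log (the helper names are illustrative, not ceph-ansible's):

    import shutil

    def detect_container_binary():
        # "check if podman binary is present": prefer podman, else docker.
        return "podman" if shutil.which("podman") else "docker"

    def ceph_exec_prefix(mon_hostname, binary):
        # Command prefix for running ceph inside the mon container of a host;
        # the ceph-mon-<hostname> pattern matches the filter used in this log.
        return [binary, "exec", "ceph-mon-" + mon_hostname]

    print(ceph_exec_prefix("testbed-node-0", detect_container_binary()))

On these nodes docker is selected, which is why the later "find a running mon container" task shells out to "docker ps -q --filter name=ceph-mon-...".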
2025-04-01 19:52:18.423327 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.423343 | orchestrator | 2025-04-01 19:52:18.423358 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-01 19:52:18.423374 | orchestrator | Tuesday 01 April 2025 19:50:19 +0000 (0:00:00.311) 0:00:07.347 ********* 2025-04-01 19:52:18.423390 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-01 19:52:18.423404 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:52:18.423418 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:52:18.423432 | orchestrator | 2025-04-01 19:52:18.423446 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-01 19:52:18.423460 | orchestrator | Tuesday 01 April 2025 19:50:20 +0000 (0:00:00.734) 0:00:08.082 ********* 2025-04-01 19:52:18.423474 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.423488 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.423502 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.423516 | orchestrator | 2025-04-01 19:52:18.423530 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-01 19:52:18.423545 | orchestrator | Tuesday 01 April 2025 19:50:20 +0000 (0:00:00.457) 0:00:08.539 ********* 2025-04-01 19:52:18.423568 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-01 19:52:18.423583 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:52:18.423603 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:52:18.423618 | orchestrator | 2025-04-01 19:52:18.423632 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-01 19:52:18.423646 | orchestrator | Tuesday 01 April 2025 19:50:23 +0000 (0:00:02.662) 0:00:11.202 ********* 2025-04-01 19:52:18.423688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:52:18.423703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:52:18.423726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:52:18.423741 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.423755 | orchestrator | 2025-04-01 19:52:18.423769 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-01 19:52:18.423783 | orchestrator | Tuesday 01 April 2025 19:50:23 +0000 (0:00:00.506) 0:00:11.709 ********* 2025-04-01 19:52:18.423799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423844 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.423859 | orchestrator | 2025-04-01 19:52:18.423873 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-01 19:52:18.423887 | orchestrator | Tuesday 01 April 2025 19:50:24 +0000 (0:00:00.731) 0:00:12.441 ********* 2025-04-01 19:52:18.423902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:52:18.423947 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.423961 | orchestrator | 2025-04-01 19:52:18.423975 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-01 19:52:18.423989 | orchestrator | Tuesday 01 April 2025 19:50:24 +0000 (0:00:00.180) 0:00:12.622 ********* 2025-04-01 19:52:18.424006 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '33d7feb55f5d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-01 19:50:21.463814', 'end': '2025-04-01 19:50:21.494964', 'delta': '0:00:00.031150', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33d7feb55f5d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-01 19:52:18.424036 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '528c834bfea5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-01 19:50:22.128010', 'end': '2025-04-01 19:50:22.161363', 'delta': '0:00:00.033353', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['528c834bfea5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-01 19:52:18.424059 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'a194d25c79cd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-01 19:50:22.818900', 'end': '2025-04-01 19:50:22.858410', 'delta': '0:00:00.039510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a194d25c79cd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-01 19:52:18.424074 | orchestrator | 2025-04-01 19:52:18.424088 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-01 19:52:18.424102 | orchestrator | Tuesday 01 April 2025 19:50:24 +0000 (0:00:00.226) 0:00:12.848 ********* 2025-04-01 19:52:18.424117 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.424131 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.424145 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.424159 | orchestrator | 2025-04-01 19:52:18.424174 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-01 19:52:18.424188 | orchestrator | Tuesday 01 April 2025 19:50:25 +0000 (0:00:00.506) 0:00:13.355 ********* 2025-04-01 19:52:18.424202 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-01 19:52:18.424216 | orchestrator | 2025-04-01 19:52:18.424230 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-01 19:52:18.424244 | orchestrator | Tuesday 01 April 2025 19:50:26 +0000 (0:00:01.277) 0:00:14.633 ********* 2025-04-01 19:52:18.424258 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424272 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424286 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424300 | orchestrator | 2025-04-01 19:52:18.424314 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-01 19:52:18.424328 | orchestrator | Tuesday 01 April 2025 19:50:27 +0000 (0:00:00.566) 0:00:15.199 ********* 2025-04-01 19:52:18.424342 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424356 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424370 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424385 | orchestrator | 2025-04-01 19:52:18.424399 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:52:18.424413 | orchestrator | Tuesday 01 April 2025 19:50:27 +0000 (0:00:00.502) 0:00:15.702 ********* 2025-04-01 19:52:18.424426 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424441 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424455 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424469 | orchestrator | 2025-04-01 19:52:18.424483 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-01 19:52:18.424497 | orchestrator | Tuesday 01 April 2025 19:50:28 +0000 (0:00:00.308) 0:00:16.010 ********* 
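The fsid handling above follows a reuse-or-generate pattern: because a cluster is already running, the current fsid is read from a monitor (delegated to testbed-node-2) and the "generate cluster fsid" task is skipped. A hedged sketch of that decision, assuming docker as the container binary and the container name pattern from this log (the exact probe command is not shown here; "ceph fsid" is used only for illustration):

    import subprocess
    import uuid

    def determine_fsid(mon_hostname="testbed-node-2", cluster="ceph"):
        # Ask a running monitor for the cluster fsid; reuse it if the probe
        # succeeds, otherwise generate a fresh one.
        probe = subprocess.run(
            ["docker", "exec", "ceph-mon-" + mon_hostname,
             "ceph", "--cluster", cluster, "fsid"],
            capture_output=True, text=True)
        if probe.returncode == 0 and probe.stdout.strip():
            return probe.stdout.strip()   # "set_fact fsid from current_fsid"
        return str(uuid.uuid4())          # "generate cluster fsid" fallback

Reusing the existing fsid keeps the pool-creation play consistent with the monitors that were deployed earlier in the job.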
2025-04-01 19:52:18.424511 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.424525 | orchestrator | 2025-04-01 19:52:18.424539 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-01 19:52:18.424553 | orchestrator | Tuesday 01 April 2025 19:50:28 +0000 (0:00:00.147) 0:00:16.157 ********* 2025-04-01 19:52:18.424573 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424588 | orchestrator | 2025-04-01 19:52:18.424602 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:52:18.424616 | orchestrator | Tuesday 01 April 2025 19:50:28 +0000 (0:00:00.275) 0:00:16.433 ********* 2025-04-01 19:52:18.424630 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424644 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424711 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424726 | orchestrator | 2025-04-01 19:52:18.424741 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-01 19:52:18.424755 | orchestrator | Tuesday 01 April 2025 19:50:28 +0000 (0:00:00.513) 0:00:16.947 ********* 2025-04-01 19:52:18.424768 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424783 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424797 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424811 | orchestrator | 2025-04-01 19:52:18.424825 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-01 19:52:18.424839 | orchestrator | Tuesday 01 April 2025 19:50:29 +0000 (0:00:00.365) 0:00:17.312 ********* 2025-04-01 19:52:18.424853 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424867 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424881 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424895 | orchestrator | 2025-04-01 19:52:18.424909 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-01 19:52:18.424923 | orchestrator | Tuesday 01 April 2025 19:50:29 +0000 (0:00:00.334) 0:00:17.647 ********* 2025-04-01 19:52:18.424937 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.424951 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.424972 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.424987 | orchestrator | 2025-04-01 19:52:18.425001 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-01 19:52:18.425020 | orchestrator | Tuesday 01 April 2025 19:50:30 +0000 (0:00:00.378) 0:00:18.026 ********* 2025-04-01 19:52:18.425035 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.425049 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.425063 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.425078 | orchestrator | 2025-04-01 19:52:18.425092 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-01 19:52:18.425106 | orchestrator | Tuesday 01 April 2025 19:50:30 +0000 (0:00:00.596) 0:00:18.622 ********* 2025-04-01 19:52:18.425120 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.425134 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.425147 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.425160 | orchestrator | 2025-04-01 19:52:18.425172 | orchestrator | TASK [ceph-facts : set_fact build 
bluestore_wal_devices from resolved symlinks] *** 2025-04-01 19:52:18.425185 | orchestrator | Tuesday 01 April 2025 19:50:31 +0000 (0:00:00.384) 0:00:19.006 ********* 2025-04-01 19:52:18.425197 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.425209 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.425222 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.425234 | orchestrator | 2025-04-01 19:52:18.425247 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-01 19:52:18.425259 | orchestrator | Tuesday 01 April 2025 19:50:31 +0000 (0:00:00.369) 0:00:19.375 ********* 2025-04-01 19:52:18.425273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--bdd573d7--384a--5f49--8a42--9b210b6d8834-osd--block--bdd573d7--384a--5f49--8a42--9b210b6d8834', 'dm-uuid-LVM-0HLscGhWI3BE3z58va0GpTBTPtoWQT6fdFHxJHx3khHHsXbjxB45Uwc2cTbz8X74'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988d16a2--b35c--5840--9d7c--a8265d6d87f9-osd--block--988d16a2--b35c--5840--9d7c--a8265d6d87f9', 'dm-uuid-LVM-eDTev9OoC6b2zQ9jQohOhBnledSwj0ogaUvEhqgmKpBVoPinnkxjkzB5dSC7OO03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52229b2b--1fb5--50ba--ad18--deadbd92af76-osd--block--52229b2b--1fb5--50ba--ad18--deadbd92af76', 'dm-uuid-LVM-ZeWxycIrl6OP9tRFrVmx3b5VdT7GwK6d1hscJ5sC2ehnymNJfBMNwGuLf1L9gQRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b9675d24--a7d4--5c32--a36a--48aa524d4563-osd--block--b9675d24--a7d4--5c32--a36a--48aa524d4563', 'dm-uuid-LVM-UiMcCVQwJh1DUyUT83GdqDyaEOZCtj3YIGFAhIIKH3ULf6b5KAFZNugOABiaJArg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9f2a8a05-1f0e-4612-894f-941da9ace46e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--bdd573d7--384a--5f49--8a42--9b210b6d8834-osd--block--bdd573d7--384a--5f49--8a42--9b210b6d8834'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-joOaWb-DLqj-TzGF-jHyS-iq9J-Ab0E-X8GYc8', 'scsi-0QEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1', 'scsi-SQEMU_QEMU_HARDDISK_19d966df-ef2b-4cdf-8cd3-e53e17cf39c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--988d16a2--b35c--5840--9d7c--a8265d6d87f9-osd--block--988d16a2--b35c--5840--9d7c--a8265d6d87f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Eq9Cfd-RShp-tSW2-6REe-F5fP-szKw-3dyL23', 'scsi-0QEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72', 'scsi-SQEMU_QEMU_HARDDISK_063ac280-b641-4001-8d36-5300696e4f72'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03', 'scsi-SQEMU_QEMU_HARDDISK_dd1fb40f-182f-4a6f-a5ec-ee8bbc345c03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part1', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part14', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part15', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part16', 'scsi-SQEMU_QEMU_HARDDISK_ae18c0ec-da2f-45ed-b23b-40c75813e891-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425723 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.425744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--52229b2b--1fb5--50ba--ad18--deadbd92af76-osd--block--52229b2b--1fb5--50ba--ad18--deadbd92af76'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WI5n3P-kwt6-sZBw-bMZg-KnjK-Px49-iD8yT6', 'scsi-0QEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c', 'scsi-SQEMU_QEMU_HARDDISK_5fefcc5b-05b8-4046-aae3-ed6d9b3b967c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b9675d24--a7d4--5c32--a36a--48aa524d4563-osd--block--b9675d24--a7d4--5c32--a36a--48aa524d4563'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0fq0AM-H2Ux-0173-mqqu-6LKu-Bsgu-eiVs3w', 'scsi-0QEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c', 'scsi-SQEMU_QEMU_HARDDISK_351e2311-cc99-4b1d-b7f8-98ba0727423c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905', 'scsi-SQEMU_QEMU_HARDDISK_f219ed29-ae42-40c1-a413-2af7dcf44905'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425797 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.425815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--959a80fb--1de6--50df--b35c--a247ba0dd9c7-osd--block--959a80fb--1de6--50df--b35c--a247ba0dd9c7', 'dm-uuid-LVM-V3SHFiLLYnCvanXpqDvqxOQH9zNG7t2501L1tIO6yDkizNtXxUkh1t3uosHJWRX0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050-osd--block--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050', 'dm-uuid-LVM-9JpExgtlxdPuoWmJNoQ2AZCX55bgBWMtMY2NJ988mICAB3y3WMqAu2EyPho90or4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:52:18.425967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part1', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part14', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part15', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part16', 'scsi-SQEMU_QEMU_HARDDISK_b705f53a-fcc8-4831-99c5-1b34182e7d6c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.425987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--959a80fb--1de6--50df--b35c--a247ba0dd9c7-osd--block--959a80fb--1de6--50df--b35c--a247ba0dd9c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sBWKjx-mczp-poSW-IrWk-PI53-Hypr-rwsvAM', 'scsi-0QEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7', 'scsi-SQEMU_QEMU_HARDDISK_ef05168f-fb35-4f94-a2bc-4c842347eaa7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.426000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050-osd--block--cc43dffc--fbc4--5f6e--b48c--5e4474ee7050'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xLZkou-nOM8-FMbI-J1uc-Uq2c-XtnG-NwevIN', 'scsi-0QEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c', 'scsi-SQEMU_QEMU_HARDDISK_e20e1bf7-86dc-47fb-9aa6-1525bff9bd7c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.426053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8', 'scsi-SQEMU_QEMU_HARDDISK_3b8b6537-11b2-4db3-b62a-18312f3aa6f8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.426070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:52:18.426090 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.426102 | orchestrator | 2025-04-01 19:52:18.426115 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-01 19:52:18.426128 | orchestrator | Tuesday 01 April 2025 19:50:32 +0000 (0:00:00.758) 0:00:20.134 ********* 2025-04-01 19:52:18.426141 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-01 19:52:18.426153 | orchestrator | 2025-04-01 19:52:18.426166 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-01 19:52:18.426178 | orchestrator | Tuesday 01 April 2025 19:50:33 +0000 (0:00:01.618) 0:00:21.752 ********* 2025-04-01 19:52:18.426190 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.426203 | orchestrator | 2025-04-01 19:52:18.426215 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-01 19:52:18.426227 | orchestrator | Tuesday 01 April 2025 19:50:33 +0000 (0:00:00.168) 0:00:21.921 ********* 2025-04-01 19:52:18.426240 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.426253 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.426268 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.426281 | orchestrator | 2025-04-01 19:52:18.426294 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-01 19:52:18.426307 | orchestrator | Tuesday 01 April 2025 19:50:34 +0000 (0:00:00.403) 0:00:22.325 ********* 2025-04-01 19:52:18.426320 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.426333 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.426345 | orchestrator | ok: 
[testbed-node-5] 2025-04-01 19:52:18.426358 | orchestrator | 2025-04-01 19:52:18.426370 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-01 19:52:18.426383 | orchestrator | Tuesday 01 April 2025 19:50:35 +0000 (0:00:00.688) 0:00:23.013 ********* 2025-04-01 19:52:18.426395 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.426407 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.426420 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.426432 | orchestrator | 2025-04-01 19:52:18.426445 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:52:18.426457 | orchestrator | Tuesday 01 April 2025 19:50:35 +0000 (0:00:00.308) 0:00:23.322 ********* 2025-04-01 19:52:18.426469 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.426482 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.426494 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.426507 | orchestrator | 2025-04-01 19:52:18.426519 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:52:18.426531 | orchestrator | Tuesday 01 April 2025 19:50:36 +0000 (0:00:00.914) 0:00:24.236 ********* 2025-04-01 19:52:18.426544 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.426556 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.426569 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.426581 | orchestrator | 2025-04-01 19:52:18.426594 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:52:18.426606 | orchestrator | Tuesday 01 April 2025 19:50:36 +0000 (0:00:00.321) 0:00:24.557 ********* 2025-04-01 19:52:18.426618 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.426631 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.426643 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.426673 | orchestrator | 2025-04-01 19:52:18.426686 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:52:18.426699 | orchestrator | Tuesday 01 April 2025 19:50:37 +0000 (0:00:00.514) 0:00:25.072 ********* 2025-04-01 19:52:18.426717 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.426730 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.426742 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.426755 | orchestrator | 2025-04-01 19:52:18.426768 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-01 19:52:18.426780 | orchestrator | Tuesday 01 April 2025 19:50:37 +0000 (0:00:00.356) 0:00:25.429 ********* 2025-04-01 19:52:18.426793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:52:18.426805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:52:18.426818 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:52:18.426831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:52:18.426843 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.426861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:52:18.426874 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:52:18.426886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:52:18.426899 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:52:18.426911 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.426924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:52:18.426936 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.426949 | orchestrator | 2025-04-01 19:52:18.426961 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-01 19:52:18.426979 | orchestrator | Tuesday 01 April 2025 19:50:38 +0000 (0:00:01.019) 0:00:26.449 ********* 2025-04-01 19:52:18.426992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:52:18.427005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:52:18.427018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:52:18.427030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:52:18.427042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:52:18.427055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:52:18.427067 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.427080 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:52:18.427092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:52:18.427104 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.427117 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:52:18.427129 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.427142 | orchestrator | 2025-04-01 19:52:18.427154 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-01 19:52:18.427167 | orchestrator | Tuesday 01 April 2025 19:50:39 +0000 (0:00:00.656) 0:00:27.105 ********* 2025-04-01 19:52:18.427179 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-01 19:52:18.427192 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-01 19:52:18.427205 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-01 19:52:18.427217 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-01 19:52:18.427230 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-01 19:52:18.427242 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-01 19:52:18.427255 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-01 19:52:18.427267 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-01 19:52:18.427279 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-01 19:52:18.427292 | orchestrator | 2025-04-01 19:52:18.427304 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-01 19:52:18.427321 | orchestrator | Tuesday 01 April 2025 19:50:40 +0000 (0:00:01.505) 0:00:28.611 ********* 2025-04-01 19:52:18.427339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:52:18.427352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:52:18.427364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:52:18.427377 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.427390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:52:18.427402 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:52:18.427414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:52:18.427427 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.427439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:52:18.427452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:52:18.427464 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:52:18.427477 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.427489 | orchestrator | 2025-04-01 19:52:18.427502 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-01 19:52:18.427514 | orchestrator | Tuesday 01 April 2025 19:50:41 +0000 (0:00:00.715) 0:00:29.326 ********* 2025-04-01 19:52:18.427527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-01 19:52:18.427539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-01 19:52:18.427552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-01 19:52:18.427564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-01 19:52:18.427577 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-01 19:52:18.427589 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.427602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-01 19:52:18.427614 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.427627 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-01 19:52:18.427639 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-01 19:52:18.427692 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-01 19:52:18.427707 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.427720 | orchestrator | 2025-04-01 19:52:18.427733 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-01 19:52:18.427745 | orchestrator | Tuesday 01 April 2025 19:50:41 +0000 (0:00:00.475) 0:00:29.801 ********* 2025-04-01 19:52:18.427758 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:52:18.427770 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:52:18.427783 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:52:18.427796 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.427809 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:52:18.427822 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:52:18.427835 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:52:18.427847 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.427860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-01 19:52:18.427879 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:52:18.427896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:52:18.427909 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.427922 | orchestrator | 2025-04-01 19:52:18.427935 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-01 19:52:18.427954 | orchestrator | Tuesday 01 April 2025 19:50:42 +0000 (0:00:00.437) 0:00:30.239 ********* 2025-04-01 19:52:18.427967 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:52:18.427979 | orchestrator | 2025-04-01 19:52:18.427992 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-01 19:52:18.428005 | orchestrator | Tuesday 01 April 2025 19:50:43 +0000 (0:00:00.814) 0:00:31.053 ********* 2025-04-01 19:52:18.428017 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428030 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428042 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428055 | orchestrator | 2025-04-01 19:52:18.428067 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-01 19:52:18.428080 | orchestrator | Tuesday 01 April 2025 19:50:43 +0000 (0:00:00.358) 0:00:31.412 ********* 2025-04-01 19:52:18.428092 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428105 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428118 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428130 | orchestrator | 2025-04-01 19:52:18.428143 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-01 19:52:18.428155 | orchestrator | Tuesday 01 April 2025 19:50:43 +0000 (0:00:00.370) 0:00:31.782 ********* 2025-04-01 19:52:18.428168 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428180 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428192 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428205 | orchestrator | 2025-04-01 19:52:18.428216 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-01 19:52:18.428226 | orchestrator | Tuesday 01 April 2025 19:50:44 +0000 (0:00:00.350) 0:00:32.133 ********* 2025-04-01 19:52:18.428236 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.428247 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.428257 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.428267 | orchestrator | 2025-04-01 19:52:18.428278 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-01 19:52:18.428288 | orchestrator | Tuesday 01 April 2025 19:50:44 +0000 (0:00:00.727) 0:00:32.860 ********* 2025-04-01 19:52:18.428298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:52:18.428308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:52:18.428318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:52:18.428328 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428339 | orchestrator | 2025-04-01 19:52:18.428349 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-01 19:52:18.428359 | orchestrator | Tuesday 01 April 2025 19:50:45 +0000 (0:00:00.407) 0:00:33.268 ********* 2025-04-01 
19:52:18.428369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:52:18.428380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:52:18.428396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:52:18.428407 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428417 | orchestrator | 2025-04-01 19:52:18.428427 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-01 19:52:18.428438 | orchestrator | Tuesday 01 April 2025 19:50:45 +0000 (0:00:00.439) 0:00:33.708 ********* 2025-04-01 19:52:18.428448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:52:18.428458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:52:18.428468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:52:18.428478 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428489 | orchestrator | 2025-04-01 19:52:18.428499 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:52:18.428509 | orchestrator | Tuesday 01 April 2025 19:50:46 +0000 (0:00:00.450) 0:00:34.158 ********* 2025-04-01 19:52:18.428525 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:52:18.428539 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:52:18.428550 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:52:18.428560 | orchestrator | 2025-04-01 19:52:18.428570 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-01 19:52:18.428593 | orchestrator | Tuesday 01 April 2025 19:50:46 +0000 (0:00:00.404) 0:00:34.562 ********* 2025-04-01 19:52:18.428604 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-01 19:52:18.428624 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-01 19:52:18.428634 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-01 19:52:18.428644 | orchestrator | 2025-04-01 19:52:18.428668 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-01 19:52:18.428679 | orchestrator | Tuesday 01 April 2025 19:50:47 +0000 (0:00:00.550) 0:00:35.113 ********* 2025-04-01 19:52:18.428689 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428699 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428709 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428719 | orchestrator | 2025-04-01 19:52:18.428729 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-01 19:52:18.428740 | orchestrator | Tuesday 01 April 2025 19:50:47 +0000 (0:00:00.579) 0:00:35.692 ********* 2025-04-01 19:52:18.428750 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428760 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428770 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428780 | orchestrator | 2025-04-01 19:52:18.428790 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-01 19:52:18.428805 | orchestrator | Tuesday 01 April 2025 19:50:48 +0000 (0:00:00.406) 0:00:36.098 ********* 2025-04-01 19:52:18.428815 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-01 19:52:18.428826 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428836 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-01 19:52:18.428846 | 
orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428856 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-01 19:52:18.428866 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428876 | orchestrator | 2025-04-01 19:52:18.428887 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-01 19:52:18.428897 | orchestrator | Tuesday 01 April 2025 19:50:48 +0000 (0:00:00.513) 0:00:36.612 ********* 2025-04-01 19:52:18.428907 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-01 19:52:18.428917 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.428928 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-01 19:52:18.428938 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.428948 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-01 19:52:18.428959 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.428969 | orchestrator | 2025-04-01 19:52:18.428984 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-01 19:52:18.428994 | orchestrator | Tuesday 01 April 2025 19:50:49 +0000 (0:00:00.373) 0:00:36.985 ********* 2025-04-01 19:52:18.429004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-01 19:52:18.429015 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-01 19:52:18.429025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-01 19:52:18.429035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-01 19:52:18.429045 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.429056 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-01 19:52:18.429065 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-01 19:52:18.429081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-01 19:52:18.429091 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-01 19:52:18.429101 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.429111 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-01 19:52:18.429121 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.429132 | orchestrator | 2025-04-01 19:52:18.429142 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-01 19:52:18.429152 | orchestrator | Tuesday 01 April 2025 19:50:50 +0000 (0:00:01.113) 0:00:38.099 ********* 2025-04-01 19:52:18.429162 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.429172 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.429183 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:52:18.429193 | orchestrator | 2025-04-01 19:52:18.429203 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-01 19:52:18.429213 | orchestrator | Tuesday 01 April 2025 19:50:50 +0000 (0:00:00.418) 0:00:38.518 ********* 2025-04-01 19:52:18.429223 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-01 19:52:18.429234 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:52:18.429244 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:52:18.429254 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-01 19:52:18.429264 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:52:18.429274 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:52:18.429284 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:52:18.429295 | orchestrator | 2025-04-01 19:52:18.429305 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-01 19:52:18.429315 | orchestrator | Tuesday 01 April 2025 19:50:51 +0000 (0:00:01.186) 0:00:39.704 ********* 2025-04-01 19:52:18.429325 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-01 19:52:18.429335 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:52:18.429346 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:52:18.429356 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-01 19:52:18.429366 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:52:18.429376 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:52:18.429387 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:52:18.429397 | orchestrator | 2025-04-01 19:52:18.429407 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-04-01 19:52:18.429417 | orchestrator | Tuesday 01 April 2025 19:50:53 +0000 (0:00:01.974) 0:00:41.679 ********* 2025-04-01 19:52:18.429428 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:52:18.429438 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:52:18.429448 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-04-01 19:52:18.429458 | orchestrator | 2025-04-01 19:52:18.429468 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-04-01 19:52:18.429482 | orchestrator | Tuesday 01 April 2025 19:50:54 +0000 (0:00:00.597) 0:00:42.277 ********* 2025-04-01 19:52:18.429494 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:52:18.429511 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:52:18.429521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 
'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:52:18.429532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:52:18.429543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-01 19:52:18.429553 | orchestrator | 2025-04-01 19:52:18.429563 | orchestrator | TASK [generate keys] *********************************************************** 2025-04-01 19:52:18.429574 | orchestrator | Tuesday 01 April 2025 19:51:30 +0000 (0:00:35.827) 0:01:18.104 ********* 2025-04-01 19:52:18.429584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429604 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429614 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429649 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-04-01 19:52:18.429672 | orchestrator | 2025-04-01 19:52:18.429682 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-04-01 19:52:18.429692 | orchestrator | Tuesday 01 April 2025 19:51:48 +0000 (0:00:18.797) 0:01:36.901 ********* 2025-04-01 19:52:18.429703 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429713 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429723 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429734 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429744 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429754 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429764 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-01 19:52:18.429774 | orchestrator | 2025-04-01 19:52:18.429784 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-04-01 19:52:18.429794 | orchestrator | Tuesday 01 April 2025 19:51:59 +0000 (0:00:10.177) 0:01:47.079 ********* 2025-04-01 19:52:18.429805 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429815 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:18.429825 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:18.429835 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429845 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:18.429861 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:18.429871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429881 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:18.429891 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:18.429902 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:18.429912 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:18.429926 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:21.479997 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:21.480119 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:21.480137 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:21.480152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-01 19:52:21.480167 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-01 19:52:21.480181 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-01 19:52:21.480196 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-04-01 19:52:21.480211 | orchestrator | 2025-04-01 19:52:21.480225 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:52:21.480241 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-04-01 19:52:21.480257 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-04-01 19:52:21.480289 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-04-01 19:52:21.480304 | orchestrator | 2025-04-01 19:52:21.480318 | orchestrator | 2025-04-01 19:52:21.480332 | orchestrator | 2025-04-01 19:52:21.480346 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:52:21.480360 | orchestrator | Tuesday 01 April 2025 19:52:16 +0000 (0:00:17.481) 0:02:04.560 ********* 2025-04-01 19:52:21.480379 | orchestrator | =============================================================================== 2025-04-01 19:52:21.480393 | orchestrator | create openstack pool(s) ----------------------------------------------- 35.83s 2025-04-01 19:52:21.480407 | orchestrator | generate keys ---------------------------------------------------------- 18.80s 2025-04-01 19:52:21.480421 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.48s 2025-04-01 19:52:21.480435 | orchestrator | get keys from monitors ------------------------------------------------- 10.18s 2025-04-01 19:52:21.480449 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.66s 2025-04-01 19:52:21.480463 | orchestrator | ceph-facts 
: set_fact ceph_admin_command -------------------------------- 1.97s 2025-04-01 19:52:21.480477 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.62s 2025-04-01 19:52:21.480491 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.51s 2025-04-01 19:52:21.480505 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.28s 2025-04-01 19:52:21.480519 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.19s 2025-04-01 19:52:21.480535 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.11s 2025-04-01 19:52:21.480551 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.02s 2025-04-01 19:52:21.480567 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.91s 2025-04-01 19:52:21.480606 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.91s 2025-04-01 19:52:21.480622 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.83s 2025-04-01 19:52:21.480637 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.81s 2025-04-01 19:52:21.480681 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.78s 2025-04-01 19:52:21.480698 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.76s 2025-04-01 19:52:21.480713 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s 2025-04-01 19:52:21.480727 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.73s 2025-04-01 19:52:21.480741 | orchestrator | 2025-04-01 19:52:18 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:21.480755 | orchestrator | 2025-04-01 19:52:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:21.480788 | orchestrator | 2025-04-01 19:52:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:21.481774 | orchestrator | 2025-04-01 19:52:21 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:21.483888 | orchestrator | 2025-04-01 19:52:21 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:21.486336 | orchestrator | 2025-04-01 19:52:21 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:24.550586 | orchestrator | 2025-04-01 19:52:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:24.550759 | orchestrator | 2025-04-01 19:52:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:24.552607 | orchestrator | 2025-04-01 19:52:24 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state STARTED 2025-04-01 19:52:24.557036 | orchestrator | 2025-04-01 19:52:24 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:24.557320 | orchestrator | 2025-04-01 19:52:24 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:27.614152 | orchestrator | 2025-04-01 19:52:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:27.614288 | orchestrator | 2025-04-01 19:52:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:27.615999 | orchestrator | 
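
The "create openstack pool(s)" task earlier in this play loops over pool definitions for backups, volumes, images, metrics and vms, each replicated (size 3) with 32 placement groups and autoscaling disabled. Below is a minimal sketch of how such a list is commonly declared for ceph-ansible's openstack_config handling; the variable names openstack_config and openstack_pools are assumptions based on ceph-ansible conventions, while the per-pool keys and values mirror the item dicts in the log.

# Sketch only: pool list of the kind the "create openstack pool(s)" task iterates over.
# Variable names and file placement are assumptions; the per-pool keys/values
# follow the item dicts logged above.
openstack_config: true
openstack_pools:
  - name: backups
    application: rbd
    pg_num: 32
    pgp_num: 32
    size: 3
    min_size: 0
    type: 1                      # replicated pool
    rule_name: replicated_rule
    pg_autoscale_mode: false
    erasure_profile: ""
    expected_num_objects: ""
  - name: volumes
    application: rbd
    pg_num: 32
    pgp_num: 32
    size: 3
    min_size: 0
    type: 1
    rule_name: replicated_rule
    pg_autoscale_mode: false
  # images, metrics and vms follow the same pattern in the logged items
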
2025-04-01 19:52:27 | INFO  | Task 7245e0be-cd0c-452e-b740-4cc039b7ffeb is in state SUCCESS 2025-04-01 19:52:27.619099 | orchestrator | 2025-04-01 19:52:27.619143 | orchestrator | 2025-04-01 19:52:27.619331 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:52:27.619352 | orchestrator | 2025-04-01 19:52:27.619367 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:52:27.619382 | orchestrator | Tuesday 01 April 2025 19:50:56 +0000 (0:00:00.347) 0:00:00.347 ********* 2025-04-01 19:52:27.619397 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.619413 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.619428 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.619442 | orchestrator | 2025-04-01 19:52:27.619457 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:52:27.619471 | orchestrator | Tuesday 01 April 2025 19:50:56 +0000 (0:00:00.416) 0:00:00.763 ********* 2025-04-01 19:52:27.619486 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-04-01 19:52:27.619501 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-04-01 19:52:27.619515 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-04-01 19:52:27.619529 | orchestrator | 2025-04-01 19:52:27.619544 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-04-01 19:52:27.619584 | orchestrator | 2025-04-01 19:52:27.619599 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-01 19:52:27.619614 | orchestrator | Tuesday 01 April 2025 19:50:57 +0000 (0:00:00.323) 0:00:01.087 ********* 2025-04-01 19:52:27.619629 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:52:27.619645 | orchestrator | 2025-04-01 19:52:27.619689 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-04-01 19:52:27.619705 | orchestrator | Tuesday 01 April 2025 19:50:57 +0000 (0:00:00.796) 0:00:01.884 ********* 2025-04-01 19:52:27.619737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.619773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.619811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.619826 | orchestrator | 2025-04-01 19:52:27.620046 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-04-01 19:52:27.620070 | orchestrator | Tuesday 01 April 2025 19:50:59 +0000 (0:00:01.624) 0:00:03.508 ********* 2025-04-01 19:52:27.620085 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.620101 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.620117 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.620132 | orchestrator | 2025-04-01 19:52:27.620147 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-01 19:52:27.620163 | orchestrator | Tuesday 01 April 2025 19:50:59 +0000 (0:00:00.295) 0:00:03.804 ********* 2025-04-01 19:52:27.620187 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-01 19:52:27.620204 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-04-01 19:52:27.620226 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-04-01 19:52:27.620242 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-04-01 19:52:27.620257 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-04-01 19:52:27.620272 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-04-01 19:52:27.620288 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-04-01 19:52:27.620304 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-01 19:52:27.620318 | orchestrator 
| skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-04-01 19:52:27.620332 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-04-01 19:52:27.620346 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-04-01 19:52:27.620360 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-04-01 19:52:27.620374 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-04-01 19:52:27.620388 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-04-01 19:52:27.620402 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-01 19:52:27.620415 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-04-01 19:52:27.620429 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-04-01 19:52:27.620443 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-04-01 19:52:27.620457 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-04-01 19:52:27.620471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-04-01 19:52:27.620485 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-04-01 19:52:27.620500 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-04-01 19:52:27.620516 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-04-01 19:52:27.620530 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-04-01 19:52:27.620544 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-04-01 19:52:27.620558 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-04-01 19:52:27.620573 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-04-01 19:52:27.620587 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-04-01 19:52:27.620601 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-04-01 19:52:27.620615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-04-01 19:52:27.620629 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-04-01 19:52:27.620698 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-04-01 19:52:27.620715 | orchestrator | 2025-04-01 19:52:27.620730 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.620744 | orchestrator | Tuesday 01 April 2025 19:51:00 +0000 (0:00:01.148) 0:00:04.953 ********* 2025-04-01 19:52:27.620758 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.620772 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.620786 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.620800 | orchestrator | 2025-04-01 19:52:27.620814 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.620828 | orchestrator | Tuesday 01 April 2025 19:51:01 +0000 (0:00:00.510) 0:00:05.463 ********* 2025-04-01 19:52:27.620842 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.620857 | orchestrator | 2025-04-01 19:52:27.620878 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.620892 | orchestrator | Tuesday 01 April 2025 19:51:01 +0000 (0:00:00.150) 0:00:05.614 ********* 2025-04-01 19:52:27.620906 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.620920 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.620934 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.620948 | orchestrator | 2025-04-01 19:52:27.620962 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.620976 | orchestrator | Tuesday 01 April 2025 19:51:02 +0000 (0:00:00.428) 0:00:06.042 ********* 2025-04-01 19:52:27.620990 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.621004 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.621018 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.621032 | orchestrator | 2025-04-01 19:52:27.621053 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.621067 | orchestrator | Tuesday 01 April 2025 19:51:02 +0000 (0:00:00.337) 0:00:06.380 ********* 2025-04-01 19:52:27.621081 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621095 | orchestrator | 2025-04-01 19:52:27.621109 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.621123 | orchestrator | Tuesday 01 April 2025 19:51:02 +0000 (0:00:00.277) 0:00:06.658 ********* 2025-04-01 19:52:27.621137 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621151 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.621165 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.621179 | orchestrator | 2025-04-01 19:52:27.621193 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.621207 | orchestrator | Tuesday 01 April 2025 19:51:03 +0000 (0:00:00.363) 0:00:07.021 ********* 2025-04-01 19:52:27.621221 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.621235 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.621343 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.621364 | orchestrator | 2025-04-01 19:52:27.621378 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.621392 | orchestrator | Tuesday 01 April 2025 19:51:03 +0000 
(0:00:00.464) 0:00:07.486 ********* 2025-04-01 19:52:27.621406 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621420 | orchestrator | 2025-04-01 19:52:27.621434 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.621447 | orchestrator | Tuesday 01 April 2025 19:51:03 +0000 (0:00:00.130) 0:00:07.617 ********* 2025-04-01 19:52:27.621461 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621475 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.621496 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.621510 | orchestrator | 2025-04-01 19:52:27.621525 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.621539 | orchestrator | Tuesday 01 April 2025 19:51:04 +0000 (0:00:00.450) 0:00:08.067 ********* 2025-04-01 19:52:27.621561 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.621576 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.621590 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.621604 | orchestrator | 2025-04-01 19:52:27.621618 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.621632 | orchestrator | Tuesday 01 April 2025 19:51:04 +0000 (0:00:00.447) 0:00:08.515 ********* 2025-04-01 19:52:27.621645 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621718 | orchestrator | 2025-04-01 19:52:27.621733 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.621748 | orchestrator | Tuesday 01 April 2025 19:51:04 +0000 (0:00:00.129) 0:00:08.644 ********* 2025-04-01 19:52:27.621762 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621776 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.621790 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.621804 | orchestrator | 2025-04-01 19:52:27.621817 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.621831 | orchestrator | Tuesday 01 April 2025 19:51:05 +0000 (0:00:00.455) 0:00:09.100 ********* 2025-04-01 19:52:27.621845 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.621859 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.621873 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.621888 | orchestrator | 2025-04-01 19:52:27.621902 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.621915 | orchestrator | Tuesday 01 April 2025 19:51:05 +0000 (0:00:00.291) 0:00:09.392 ********* 2025-04-01 19:52:27.621929 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621943 | orchestrator | 2025-04-01 19:52:27.621956 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.621968 | orchestrator | Tuesday 01 April 2025 19:51:05 +0000 (0:00:00.252) 0:00:09.644 ********* 2025-04-01 19:52:27.621980 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.621993 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.622005 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.622071 | orchestrator | 2025-04-01 19:52:27.622087 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.622100 | orchestrator | Tuesday 01 April 2025 19:51:05 +0000 (0:00:00.309) 
0:00:09.954 ********* 2025-04-01 19:52:27.622113 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.622125 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.622138 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.622150 | orchestrator | 2025-04-01 19:52:27.622163 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.622175 | orchestrator | Tuesday 01 April 2025 19:51:06 +0000 (0:00:00.501) 0:00:10.455 ********* 2025-04-01 19:52:27.622188 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622200 | orchestrator | 2025-04-01 19:52:27.622213 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.622225 | orchestrator | Tuesday 01 April 2025 19:51:06 +0000 (0:00:00.126) 0:00:10.581 ********* 2025-04-01 19:52:27.622237 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622250 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.622263 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.622275 | orchestrator | 2025-04-01 19:52:27.622287 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.622300 | orchestrator | Tuesday 01 April 2025 19:51:07 +0000 (0:00:00.457) 0:00:11.039 ********* 2025-04-01 19:52:27.622320 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.622334 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.622346 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.622358 | orchestrator | 2025-04-01 19:52:27.622371 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.622389 | orchestrator | Tuesday 01 April 2025 19:51:07 +0000 (0:00:00.503) 0:00:11.542 ********* 2025-04-01 19:52:27.622402 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622426 | orchestrator | 2025-04-01 19:52:27.622438 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.622451 | orchestrator | Tuesday 01 April 2025 19:51:07 +0000 (0:00:00.134) 0:00:11.677 ********* 2025-04-01 19:52:27.622463 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622475 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.622488 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.622500 | orchestrator | 2025-04-01 19:52:27.622512 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.622524 | orchestrator | Tuesday 01 April 2025 19:51:08 +0000 (0:00:00.500) 0:00:12.178 ********* 2025-04-01 19:52:27.622537 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.622549 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.622562 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.622574 | orchestrator | 2025-04-01 19:52:27.622587 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.622599 | orchestrator | Tuesday 01 April 2025 19:51:08 +0000 (0:00:00.501) 0:00:12.679 ********* 2025-04-01 19:52:27.622612 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622624 | orchestrator | 2025-04-01 19:52:27.622636 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.622648 | orchestrator | Tuesday 01 April 2025 19:51:08 +0000 (0:00:00.140) 0:00:12.820 ********* 2025-04-01 
19:52:27.622679 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622692 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.622704 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.622716 | orchestrator | 2025-04-01 19:52:27.622729 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.622741 | orchestrator | Tuesday 01 April 2025 19:51:09 +0000 (0:00:00.481) 0:00:13.301 ********* 2025-04-01 19:52:27.622754 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.622767 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.622779 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.622791 | orchestrator | 2025-04-01 19:52:27.622804 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.622817 | orchestrator | Tuesday 01 April 2025 19:51:09 +0000 (0:00:00.387) 0:00:13.689 ********* 2025-04-01 19:52:27.622829 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622841 | orchestrator | 2025-04-01 19:52:27.622853 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.622866 | orchestrator | Tuesday 01 April 2025 19:51:09 +0000 (0:00:00.130) 0:00:13.820 ********* 2025-04-01 19:52:27.622878 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.622891 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.622903 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.622915 | orchestrator | 2025-04-01 19:52:27.622928 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.622940 | orchestrator | Tuesday 01 April 2025 19:51:10 +0000 (0:00:00.481) 0:00:14.302 ********* 2025-04-01 19:52:27.622953 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.622965 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.622978 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.622990 | orchestrator | 2025-04-01 19:52:27.623003 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.623015 | orchestrator | Tuesday 01 April 2025 19:51:10 +0000 (0:00:00.489) 0:00:14.792 ********* 2025-04-01 19:52:27.623028 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623040 | orchestrator | 2025-04-01 19:52:27.623053 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.623065 | orchestrator | Tuesday 01 April 2025 19:51:10 +0000 (0:00:00.141) 0:00:14.933 ********* 2025-04-01 19:52:27.623077 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623090 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.623102 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.623115 | orchestrator | 2025-04-01 19:52:27.623133 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-01 19:52:27.623146 | orchestrator | Tuesday 01 April 2025 19:51:11 +0000 (0:00:00.476) 0:00:15.409 ********* 2025-04-01 19:52:27.623158 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:52:27.623175 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:52:27.623188 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:52:27.623201 | orchestrator | 2025-04-01 19:52:27.623214 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-01 19:52:27.623226 | 
orchestrator | Tuesday 01 April 2025 19:51:12 +0000 (0:00:00.709) 0:00:16.118 ********* 2025-04-01 19:52:27.623238 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623251 | orchestrator | 2025-04-01 19:52:27.623263 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-01 19:52:27.623276 | orchestrator | Tuesday 01 April 2025 19:51:12 +0000 (0:00:00.174) 0:00:16.293 ********* 2025-04-01 19:52:27.623288 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623301 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.623313 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.623326 | orchestrator | 2025-04-01 19:52:27.623338 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-04-01 19:52:27.623351 | orchestrator | Tuesday 01 April 2025 19:51:12 +0000 (0:00:00.503) 0:00:16.796 ********* 2025-04-01 19:52:27.623363 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:52:27.623375 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:52:27.623388 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:52:27.623400 | orchestrator | 2025-04-01 19:52:27.623413 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-04-01 19:52:27.623425 | orchestrator | Tuesday 01 April 2025 19:51:15 +0000 (0:00:02.865) 0:00:19.661 ********* 2025-04-01 19:52:27.623437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-01 19:52:27.623456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-01 19:52:27.623469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-01 19:52:27.623481 | orchestrator | 2025-04-01 19:52:27.623494 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-04-01 19:52:27.623511 | orchestrator | Tuesday 01 April 2025 19:51:18 +0000 (0:00:03.198) 0:00:22.860 ********* 2025-04-01 19:52:27.623523 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-01 19:52:27.623536 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-01 19:52:27.623549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-01 19:52:27.623562 | orchestrator | 2025-04-01 19:52:27.623574 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-04-01 19:52:27.623587 | orchestrator | Tuesday 01 April 2025 19:51:21 +0000 (0:00:03.015) 0:00:25.875 ********* 2025-04-01 19:52:27.623599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-01 19:52:27.623612 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-01 19:52:27.623624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-01 19:52:27.623637 | orchestrator | 2025-04-01 19:52:27.623649 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-04-01 19:52:27.623678 | orchestrator | Tuesday 01 April 2025 19:51:24 +0000 (0:00:02.335) 0:00:28.210 ********* 2025-04-01 19:52:27.623691 
| orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623704 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.623717 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.623729 | orchestrator | 2025-04-01 19:52:27.623742 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-04-01 19:52:27.623760 | orchestrator | Tuesday 01 April 2025 19:51:24 +0000 (0:00:00.315) 0:00:28.526 ********* 2025-04-01 19:52:27.623772 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.623785 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.623797 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.623810 | orchestrator | 2025-04-01 19:52:27.623823 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-01 19:52:27.623835 | orchestrator | Tuesday 01 April 2025 19:51:24 +0000 (0:00:00.460) 0:00:28.986 ********* 2025-04-01 19:52:27.623848 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:52:27.623860 | orchestrator | 2025-04-01 19:52:27.623873 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-01 19:52:27.623885 | orchestrator | Tuesday 01 April 2025 19:51:25 +0000 (0:00:00.913) 0:00:29.899 ********* 2025-04-01 19:52:27.623907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.623924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.623953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.623967 | orchestrator | 2025-04-01 19:52:27.623980 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-04-01 19:52:27.623993 | orchestrator | Tuesday 01 April 2025 19:51:27 +0000 (0:00:01.663) 0:00:31.563 ********* 2025-04-01 19:52:27.624006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624030 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.624051 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624072 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.624085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624099 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.624112 | orchestrator | 2025-04-01 19:52:27.624124 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-04-01 19:52:27.624137 | orchestrator | Tuesday 01 April 2025 19:51:28 +0000 (0:00:00.896) 0:00:32.460 ********* 2025-04-01 19:52:27.624158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624178 | orchestrator | skipping: [testbed-node-0] 
2025-04-01 19:52:27.624191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624205 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.624227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-01 19:52:27.624247 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.624259 | orchestrator | 2025-04-01 19:52:27.624272 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-04-01 19:52:27.624284 | orchestrator | Tuesday 01 April 2025 19:51:29 +0000 (0:00:01.383) 0:00:33.843 ********* 2025-04-01 19:52:27.624302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.624316 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.624344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-01 19:52:27.624358 | orchestrator | 2025-04-01 19:52:27.624376 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-01 19:52:27.624389 | orchestrator | Tuesday 01 April 2025 19:51:35 +0000 (0:00:05.919) 0:00:39.762 ********* 2025-04-01 19:52:27.624402 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:52:27.624414 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:52:27.624427 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:52:27.624439 | orchestrator | 2025-04-01 19:52:27.624452 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-01 19:52:27.624464 | orchestrator | Tuesday 01 April 2025 19:51:36 +0000 (0:00:00.526) 0:00:40.288 ********* 2025-04-01 19:52:27.624477 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:52:27.624490 | orchestrator | 2025-04-01 19:52:27.624502 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-04-01 19:52:27.624515 | orchestrator | Tuesday 01 April 2025 19:51:36 +0000 (0:00:00.710) 0:00:40.999 ********* 2025-04-01 19:52:27.624527 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:52:27.624539 | orchestrator | 2025-04-01 19:52:27.624552 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-04-01 19:52:27.624564 | orchestrator | Tuesday 01 April 2025 19:51:39 +0000 (0:00:02.749) 0:00:43.748 ********* 2025-04-01 19:52:27.624577 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:52:27.624589 | orchestrator | 2025-04-01 19:52:27.624602 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-04-01 19:52:27.624614 | orchestrator | Tuesday 01 April 2025 19:51:41 +0000 (0:00:02.077) 0:00:45.826 ********* 2025-04-01 19:52:27.624627 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:52:27.624639 | orchestrator | 2025-04-01 19:52:27.624694 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-01 19:52:27.624710 | orchestrator | Tuesday 01 April 2025 19:51:56 +0000 (0:00:14.610) 0:01:00.437 ********* 2025-04-01 19:52:27.624722 | orchestrator | 2025-04-01 19:52:27.624735 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-01 19:52:27.624747 | orchestrator | Tuesday 01 April 2025 19:51:56 +0000 (0:00:00.078) 0:01:00.515 ********* 2025-04-01 19:52:27.624760 | orchestrator | 2025-04-01 19:52:27.624772 | orchestrator | TASK [horizon : Flush handlers] 
************************************************ 2025-04-01 19:52:27.624789 | orchestrator | Tuesday 01 April 2025 19:51:56 +0000 (0:00:00.203) 0:01:00.718 ********* 2025-04-01 19:52:27.624801 | orchestrator | 2025-04-01 19:52:27.624811 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-04-01 19:52:27.624821 | orchestrator | Tuesday 01 April 2025 19:51:56 +0000 (0:00:00.063) 0:01:00.782 ********* 2025-04-01 19:52:27.624831 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:52:27.624841 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:52:27.624852 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:52:27.624862 | orchestrator | 2025-04-01 19:52:27.624872 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:52:27.624883 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-01 19:52:27.624893 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-01 19:52:27.624992 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-01 19:52:27.625004 | orchestrator | 2025-04-01 19:52:27.625014 | orchestrator | 2025-04-01 19:52:27.625025 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:52:27.625035 | orchestrator | Tuesday 01 April 2025 19:52:24 +0000 (0:00:27.423) 0:01:28.206 ********* 2025-04-01 19:52:27.625045 | orchestrator | =============================================================================== 2025-04-01 19:52:27.625056 | orchestrator | horizon : Restart horizon container ------------------------------------ 27.42s 2025-04-01 19:52:27.625073 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.61s 2025-04-01 19:52:27.625083 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.92s 2025-04-01 19:52:27.625093 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.20s 2025-04-01 19:52:27.625104 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.02s 2025-04-01 19:52:27.625114 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.87s 2025-04-01 19:52:27.625124 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.75s 2025-04-01 19:52:27.625134 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.34s 2025-04-01 19:52:27.625144 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.08s 2025-04-01 19:52:27.625154 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.66s 2025-04-01 19:52:27.625165 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.62s 2025-04-01 19:52:27.625175 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.38s 2025-04-01 19:52:27.625185 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.15s 2025-04-01 19:52:27.625201 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.91s 2025-04-01 19:52:27.626274 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.90s 
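The horizon container definition dumped above configures a healthcheck of the form healthcheck_curl http://<api address>:80 with interval 30, retries 3 and timeout 30. Purely as a rough illustration of what that probe amounts to (this is not kolla's healthcheck_curl helper, just an approximation of the configured parameters, reusing the testbed-node-0 address from the log):

import time
import urllib.error
import urllib.request

def horizon_healthcheck(url: str = "http://192.168.16.10:80",
                        retries: int = 3, timeout: int = 30, interval: int = 30) -> bool:
    """Probe the Horizon listen address; True as soon as the server answers at all."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True                      # got an HTTP response
        except urllib.error.HTTPError:
            return True                          # server answered, even with an error status
        except OSError:
            if attempt + 1 < retries:
                time.sleep(interval)             # configured check interval between retries
    return False

The real check runs inside the horizon container against the node's own api_interface address, which is why the healthcheck URL differs per node (192.168.16.10/.11/.12) in the item dumps above.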
2025-04-01 19:52:27.626294 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-04-01 19:52:27.626305 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-04-01 19:52:27.626315 | orchestrator | horizon : Update policy file name --------------------------------------- 0.71s 2025-04-01 19:52:27.626325 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-04-01 19:52:27.626335 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-04-01 19:52:27.626346 | orchestrator | 2025-04-01 19:52:27 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:27.626356 | orchestrator | 2025-04-01 19:52:27 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:27.626371 | orchestrator | 2025-04-01 19:52:27 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:30.684519 | orchestrator | 2025-04-01 19:52:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:30.684688 | orchestrator | 2025-04-01 19:52:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:30.685112 | orchestrator | 2025-04-01 19:52:30 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:30.686242 | orchestrator | 2025-04-01 19:52:30 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:30.692524 | orchestrator | 2025-04-01 19:52:30 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:33.731048 | orchestrator | 2025-04-01 19:52:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:33.731187 | orchestrator | 2025-04-01 19:52:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:33.732701 | orchestrator | 2025-04-01 19:52:33 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:33.733989 | orchestrator | 2025-04-01 19:52:33 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:33.734870 | orchestrator | 2025-04-01 19:52:33 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:36.777482 | orchestrator | 2025-04-01 19:52:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:36.777641 | orchestrator | 2025-04-01 19:52:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:36.779799 | orchestrator | 2025-04-01 19:52:36 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:36.782709 | orchestrator | 2025-04-01 19:52:36 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:36.785230 | orchestrator | 2025-04-01 19:52:36 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:36.785543 | orchestrator | 2025-04-01 19:52:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:39.839760 | orchestrator | 2025-04-01 19:52:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:39.842702 | orchestrator | 2025-04-01 19:52:39 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:39.845107 | orchestrator | 2025-04-01 19:52:39 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:39.847437 | 
orchestrator | 2025-04-01 19:52:39 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:39.848165 | orchestrator | 2025-04-01 19:52:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:42.899556 | orchestrator | 2025-04-01 19:52:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:42.900346 | orchestrator | 2025-04-01 19:52:42 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:42.901461 | orchestrator | 2025-04-01 19:52:42 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:42.902483 | orchestrator | 2025-04-01 19:52:42 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:45.956979 | orchestrator | 2025-04-01 19:52:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:45.957098 | orchestrator | 2025-04-01 19:52:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:45.960214 | orchestrator | 2025-04-01 19:52:45 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:45.964737 | orchestrator | 2025-04-01 19:52:45 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:45.966961 | orchestrator | 2025-04-01 19:52:45 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:49.022346 | orchestrator | 2025-04-01 19:52:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:49.022487 | orchestrator | 2025-04-01 19:52:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:49.022938 | orchestrator | 2025-04-01 19:52:49 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:49.024272 | orchestrator | 2025-04-01 19:52:49 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:49.025509 | orchestrator | 2025-04-01 19:52:49 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:49.025808 | orchestrator | 2025-04-01 19:52:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:52.079227 | orchestrator | 2025-04-01 19:52:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:52.080447 | orchestrator | 2025-04-01 19:52:52 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:52.082112 | orchestrator | 2025-04-01 19:52:52 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:52.083931 | orchestrator | 2025-04-01 19:52:52 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:55.133830 | orchestrator | 2025-04-01 19:52:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:55.133957 | orchestrator | 2025-04-01 19:52:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:55.135784 | orchestrator | 2025-04-01 19:52:55 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:55.137881 | orchestrator | 2025-04-01 19:52:55 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:55.139195 | orchestrator | 2025-04-01 19:52:55 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:52:58.183697 | orchestrator | 2025-04-01 19:52:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:52:58.183853 | orchestrator | 2025-04-01 
19:52:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:52:58.187103 | orchestrator | 2025-04-01 19:52:58 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:52:58.190809 | orchestrator | 2025-04-01 19:52:58 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state STARTED 2025-04-01 19:52:58.192617 | orchestrator | 2025-04-01 19:52:58 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:53:01.234255 | orchestrator | 2025-04-01 19:52:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:01.234373 | orchestrator | 2025-04-01 19:53:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:01.236602 | orchestrator | 2025-04-01 19:53:01 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:01.239853 | orchestrator | 2025-04-01 19:53:01.239888 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:53:01.239902 | orchestrator | 2025-04-01 19:53:01.239915 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-04-01 19:53:01.239928 | orchestrator | 2025-04-01 19:53:01.239940 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-01 19:53:01.239953 | orchestrator | Tuesday 01 April 2025 19:52:29 +0000 (0:00:00.516) 0:00:00.516 ********* 2025-04-01 19:53:01.239966 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-04-01 19:53:01.239980 | orchestrator | 2025-04-01 19:53:01.239992 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-01 19:53:01.240022 | orchestrator | Tuesday 01 April 2025 19:52:30 +0000 (0:00:00.221) 0:00:00.738 ********* 2025-04-01 19:53:01.240035 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.240048 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-01 19:53:01.240061 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-01 19:53:01.240074 | orchestrator | 2025-04-01 19:53:01.240086 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-01 19:53:01.240099 | orchestrator | Tuesday 01 April 2025 19:52:31 +0000 (0:00:01.009) 0:00:01.748 ********* 2025-04-01 19:53:01.240111 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-04-01 19:53:01.240124 | orchestrator | 2025-04-01 19:53:01.240136 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-01 19:53:01.240149 | orchestrator | Tuesday 01 April 2025 19:52:31 +0000 (0:00:00.250) 0:00:01.999 ********* 2025-04-01 19:53:01.240161 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240175 | orchestrator | 2025-04-01 19:53:01.240188 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-01 19:53:01.240200 | orchestrator | Tuesday 01 April 2025 19:52:31 +0000 (0:00:00.620) 0:00:02.619 ********* 2025-04-01 19:53:01.240235 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240249 | orchestrator | 2025-04-01 19:53:01.240262 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-01 19:53:01.240274 | orchestrator | Tuesday 01 April 2025 19:52:32 
+0000 (0:00:00.125) 0:00:02.744 ********* 2025-04-01 19:53:01.240286 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240299 | orchestrator | 2025-04-01 19:53:01.240311 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-01 19:53:01.240324 | orchestrator | Tuesday 01 April 2025 19:52:32 +0000 (0:00:00.488) 0:00:03.233 ********* 2025-04-01 19:53:01.240336 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240349 | orchestrator | 2025-04-01 19:53:01.240440 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-01 19:53:01.240460 | orchestrator | Tuesday 01 April 2025 19:52:32 +0000 (0:00:00.181) 0:00:03.414 ********* 2025-04-01 19:53:01.240473 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240485 | orchestrator | 2025-04-01 19:53:01.240498 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-01 19:53:01.240511 | orchestrator | Tuesday 01 April 2025 19:52:32 +0000 (0:00:00.124) 0:00:03.539 ********* 2025-04-01 19:53:01.240523 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240535 | orchestrator | 2025-04-01 19:53:01.240548 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-01 19:53:01.240560 | orchestrator | Tuesday 01 April 2025 19:52:33 +0000 (0:00:00.164) 0:00:03.704 ********* 2025-04-01 19:53:01.240572 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.240585 | orchestrator | 2025-04-01 19:53:01.240598 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-01 19:53:01.240610 | orchestrator | Tuesday 01 April 2025 19:52:33 +0000 (0:00:00.155) 0:00:03.859 ********* 2025-04-01 19:53:01.240622 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240635 | orchestrator | 2025-04-01 19:53:01.240648 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-01 19:53:01.240682 | orchestrator | Tuesday 01 April 2025 19:52:33 +0000 (0:00:00.322) 0:00:04.182 ********* 2025-04-01 19:53:01.240696 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.240708 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:53:01.240721 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:53:01.240733 | orchestrator | 2025-04-01 19:53:01.240815 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-01 19:53:01.240828 | orchestrator | Tuesday 01 April 2025 19:52:34 +0000 (0:00:01.089) 0:00:05.271 ********* 2025-04-01 19:53:01.240840 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.240853 | orchestrator | 2025-04-01 19:53:01.240865 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-01 19:53:01.240877 | orchestrator | Tuesday 01 April 2025 19:52:34 +0000 (0:00:00.279) 0:00:05.551 ********* 2025-04-01 19:53:01.240890 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.240902 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:53:01.240915 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:53:01.240927 | orchestrator | 2025-04-01 19:53:01.240940 | 
orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-01 19:53:01.240959 | orchestrator | Tuesday 01 April 2025 19:52:36 +0000 (0:00:02.015) 0:00:07.566 ********* 2025-04-01 19:53:01.240972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:53:01.240985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:53:01.240997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:53:01.241010 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241032 | orchestrator | 2025-04-01 19:53:01.241045 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-01 19:53:01.241067 | orchestrator | Tuesday 01 April 2025 19:52:37 +0000 (0:00:00.471) 0:00:08.038 ********* 2025-04-01 19:53:01.241085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241101 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241126 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241138 | orchestrator | 2025-04-01 19:53:01.241151 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-01 19:53:01.241163 | orchestrator | Tuesday 01 April 2025 19:52:38 +0000 (0:00:00.878) 0:00:08.916 ********* 2025-04-01 19:53:01.241177 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241191 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241204 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-01 19:53:01.241217 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241229 | orchestrator | 2025-04-01 
19:53:01.241242 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-01 19:53:01.241254 | orchestrator | Tuesday 01 April 2025 19:52:38 +0000 (0:00:00.176) 0:00:09.092 ********* 2025-04-01 19:53:01.241272 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '33d7feb55f5d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-01 19:52:35.609738', 'end': '2025-04-01 19:52:35.637926', 'delta': '0:00:00.028188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33d7feb55f5d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-01 19:53:01.241289 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '528c834bfea5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-01 19:52:36.216633', 'end': '2025-04-01 19:52:36.239650', 'delta': '0:00:00.023017', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['528c834bfea5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-01 19:53:01.241321 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'a194d25c79cd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-01 19:52:36.777075', 'end': '2025-04-01 19:52:36.803942', 'delta': '0:00:00.026867', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a194d25c79cd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-01 19:53:01.241334 | orchestrator | 2025-04-01 19:53:01.241347 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-01 19:53:01.241360 | orchestrator | Tuesday 01 April 2025 19:52:38 +0000 (0:00:00.238) 0:00:09.330 ********* 2025-04-01 19:53:01.241372 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.241385 | orchestrator | 2025-04-01 19:53:01.241397 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-01 19:53:01.241409 | orchestrator | Tuesday 01 April 2025 19:52:39 +0000 (0:00:00.679) 0:00:10.010 ********* 2025-04-01 19:53:01.241422 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-04-01 19:53:01.241434 | orchestrator | 2025-04-01 19:53:01.241448 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-01 19:53:01.241462 | orchestrator | Tuesday 01 April 2025 19:52:40 +0000 (0:00:01.205) 0:00:11.215 ********* 2025-04-01 
19:53:01.241477 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241491 | orchestrator | 2025-04-01 19:53:01.241505 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-01 19:53:01.241519 | orchestrator | Tuesday 01 April 2025 19:52:40 +0000 (0:00:00.158) 0:00:11.374 ********* 2025-04-01 19:53:01.241533 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241546 | orchestrator | 2025-04-01 19:53:01.241559 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:53:01.241572 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.255) 0:00:11.630 ********* 2025-04-01 19:53:01.241587 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241600 | orchestrator | 2025-04-01 19:53:01.241613 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-01 19:53:01.241627 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.149) 0:00:11.780 ********* 2025-04-01 19:53:01.241641 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.241655 | orchestrator | 2025-04-01 19:53:01.241685 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-01 19:53:01.241700 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.134) 0:00:11.914 ********* 2025-04-01 19:53:01.241713 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241727 | orchestrator | 2025-04-01 19:53:01.241741 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-01 19:53:01.241754 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.245) 0:00:12.159 ********* 2025-04-01 19:53:01.241768 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241782 | orchestrator | 2025-04-01 19:53:01.241797 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-01 19:53:01.241809 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.134) 0:00:12.293 ********* 2025-04-01 19:53:01.241822 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241841 | orchestrator | 2025-04-01 19:53:01.241854 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-01 19:53:01.241866 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.163) 0:00:12.457 ********* 2025-04-01 19:53:01.241879 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241892 | orchestrator | 2025-04-01 19:53:01.241904 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-01 19:53:01.241917 | orchestrator | Tuesday 01 April 2025 19:52:41 +0000 (0:00:00.137) 0:00:12.595 ********* 2025-04-01 19:53:01.241930 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.241950 | orchestrator | 2025-04-01 19:53:01.241962 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-01 19:53:01.241975 | orchestrator | Tuesday 01 April 2025 19:52:42 +0000 (0:00:00.345) 0:00:12.940 ********* 2025-04-01 19:53:01.241988 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242001 | orchestrator | 2025-04-01 19:53:01.242058 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-01 19:53:01.242080 | orchestrator | Tuesday 01 April 2025 19:52:42 +0000 (0:00:00.156) 0:00:13.097 ********* 
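A little earlier in this play, the "find a running mon container" and "set_fact running_mon - container" tasks record the exact per-node command used: docker ps -q --filter name=ceph-mon-<hostname>. As a minimal standalone sketch (not the ceph-ansible task itself, just the same command wrapped in Python):

import subprocess

def find_running_mon(hostname: str) -> str:
    """Return the container ID of a running ceph-mon container for the host, or '' if none."""
    # Mirrors the command recorded in the task output above:
    #   docker ps -q --filter name=ceph-mon-<hostname>
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout.strip()

In the run above this lookup returned 33d7feb55f5d, 528c834bfea5 and a194d25c79cd for testbed-node-0/1/2 respectively, which is what later lets the play query the existing cluster fsid through a running monitor.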
2025-04-01 19:53:01.242094 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242107 | orchestrator | 2025-04-01 19:53:01.242120 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-01 19:53:01.242132 | orchestrator | Tuesday 01 April 2025 19:52:42 +0000 (0:00:00.136) 0:00:13.233 ********* 2025-04-01 19:53:01.242145 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242157 | orchestrator | 2025-04-01 19:53:01.242169 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-01 19:53:01.242182 | orchestrator | Tuesday 01 April 2025 19:52:42 +0000 (0:00:00.143) 0:00:13.376 ********* 2025-04-01 19:53:01.242195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-01 19:53:01.242330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part1', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part14', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part15', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part16', 'scsi-SQEMU_QEMU_HARDDISK_250a6be6-ee42-4653-b909-5b3edf0d7432-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:53:01.242347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ab8dfbad-f338-4768-a4e7-f4b333b69279', 'scsi-SQEMU_QEMU_HARDDISK_ab8dfbad-f338-4768-a4e7-f4b333b69279'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:53:01.242368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9b8ece6-9486-4a7c-9bf5-40c217f02d2d', 'scsi-SQEMU_QEMU_HARDDISK_a9b8ece6-9486-4a7c-9bf5-40c217f02d2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:53:01.242382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_75999ceb-501f-420c-8b43-800350cfb103', 'scsi-SQEMU_QEMU_HARDDISK_75999ceb-501f-420c-8b43-800350cfb103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:53:01.242396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-01-18-51-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-01 19:53:01.242409 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242422 | orchestrator | 2025-04-01 19:53:01.242435 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-01 19:53:01.242447 | orchestrator | Tuesday 01 April 2025 19:52:43 +0000 (0:00:00.319) 0:00:13.696 ********* 2025-04-01 19:53:01.242460 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242472 | orchestrator | 2025-04-01 19:53:01.242485 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-01 19:53:01.242497 | orchestrator | Tuesday 01 April 2025 19:52:43 +0000 (0:00:00.257) 0:00:13.953 ********* 2025-04-01 19:53:01.242510 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242522 | orchestrator | 2025-04-01 19:53:01.242535 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-01 19:53:01.242547 | orchestrator | Tuesday 01 April 2025 19:52:43 +0000 (0:00:00.141) 0:00:14.095 ********* 2025-04-01 19:53:01.242560 | orchestrator | skipping: [testbed-node-0] 2025-04-01 
19:53:01.242572 | orchestrator | 2025-04-01 19:53:01.242585 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-01 19:53:01.242597 | orchestrator | Tuesday 01 April 2025 19:52:43 +0000 (0:00:00.138) 0:00:14.233 ********* 2025-04-01 19:53:01.242615 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.242628 | orchestrator | 2025-04-01 19:53:01.242640 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-01 19:53:01.242652 | orchestrator | Tuesday 01 April 2025 19:52:44 +0000 (0:00:00.498) 0:00:14.732 ********* 2025-04-01 19:53:01.242691 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.242704 | orchestrator | 2025-04-01 19:53:01.242717 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:53:01.242729 | orchestrator | Tuesday 01 April 2025 19:52:44 +0000 (0:00:00.138) 0:00:14.870 ********* 2025-04-01 19:53:01.242742 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.242754 | orchestrator | 2025-04-01 19:53:01.242767 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:53:01.242779 | orchestrator | Tuesday 01 April 2025 19:52:44 +0000 (0:00:00.461) 0:00:15.332 ********* 2025-04-01 19:53:01.242798 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.242811 | orchestrator | 2025-04-01 19:53:01.242823 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-01 19:53:01.242836 | orchestrator | Tuesday 01 April 2025 19:52:45 +0000 (0:00:00.365) 0:00:15.698 ********* 2025-04-01 19:53:01.242848 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242861 | orchestrator | 2025-04-01 19:53:01.242873 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-01 19:53:01.242885 | orchestrator | Tuesday 01 April 2025 19:52:45 +0000 (0:00:00.268) 0:00:15.966 ********* 2025-04-01 19:53:01.242898 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242910 | orchestrator | 2025-04-01 19:53:01.242922 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-01 19:53:01.242935 | orchestrator | Tuesday 01 April 2025 19:52:45 +0000 (0:00:00.152) 0:00:16.119 ********* 2025-04-01 19:53:01.242947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:53:01.242960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:53:01.242972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:53:01.242985 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.242997 | orchestrator | 2025-04-01 19:53:01.243010 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-01 19:53:01.243022 | orchestrator | Tuesday 01 April 2025 19:52:46 +0000 (0:00:00.584) 0:00:16.703 ********* 2025-04-01 19:53:01.243035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:53:01.243047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:53:01.243059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:53:01.243072 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.243084 | orchestrator | 2025-04-01 19:53:01.243097 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to 
monitor_address] ************* 2025-04-01 19:53:01.243109 | orchestrator | Tuesday 01 April 2025 19:52:46 +0000 (0:00:00.555) 0:00:17.259 ********* 2025-04-01 19:53:01.243122 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.243135 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-01 19:53:01.243147 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-01 19:53:01.243159 | orchestrator | 2025-04-01 19:53:01.243172 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-01 19:53:01.243184 | orchestrator | Tuesday 01 April 2025 19:52:47 +0000 (0:00:01.263) 0:00:18.522 ********* 2025-04-01 19:53:01.243197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:53:01.243209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:53:01.243221 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:53:01.243234 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.243246 | orchestrator | 2025-04-01 19:53:01.243259 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-01 19:53:01.243276 | orchestrator | Tuesday 01 April 2025 19:52:48 +0000 (0:00:00.224) 0:00:18.747 ********* 2025-04-01 19:53:01.243289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-01 19:53:01.243301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-01 19:53:01.243314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-01 19:53:01.243326 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.243339 | orchestrator | 2025-04-01 19:53:01.243351 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-01 19:53:01.243364 | orchestrator | Tuesday 01 April 2025 19:52:48 +0000 (0:00:00.254) 0:00:19.002 ********* 2025-04-01 19:53:01.243376 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-01 19:53:01.243389 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-01 19:53:01.243413 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-01 19:53:01.243426 | orchestrator | 2025-04-01 19:53:01.243438 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-01 19:53:01.243451 | orchestrator | Tuesday 01 April 2025 19:52:48 +0000 (0:00:00.216) 0:00:19.218 ********* 2025-04-01 19:53:01.243463 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.243476 | orchestrator | 2025-04-01 19:53:01.243488 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-01 19:53:01.243501 | orchestrator | Tuesday 01 April 2025 19:52:48 +0000 (0:00:00.368) 0:00:19.587 ********* 2025-04-01 19:53:01.243518 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:01.243531 | orchestrator | 2025-04-01 19:53:01.243544 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-01 19:53:01.243556 | orchestrator | Tuesday 01 April 2025 19:52:49 +0000 (0:00:00.128) 0:00:19.716 ********* 2025-04-01 19:53:01.243569 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.243586 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:53:01.243599 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:53:01.243611 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-01 19:53:01.243624 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:53:01.243636 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:53:01.243649 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:53:01.243677 | orchestrator | 2025-04-01 19:53:01.243691 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-01 19:53:01.243703 | orchestrator | Tuesday 01 April 2025 19:52:50 +0000 (0:00:01.047) 0:00:20.764 ********* 2025-04-01 19:53:01.243716 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-01 19:53:01.243728 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-01 19:53:01.243741 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-01 19:53:01.243754 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-01 19:53:01.243766 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-01 19:53:01.243778 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-01 19:53:01.243791 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-01 19:53:01.243803 | orchestrator | 2025-04-01 19:53:01.243816 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-04-01 19:53:01.243828 | orchestrator | Tuesday 01 April 2025 19:52:51 +0000 (0:00:01.755) 0:00:22.519 ********* 2025-04-01 19:53:01.243840 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:01.243853 | orchestrator | 2025-04-01 19:53:01.243865 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-04-01 19:53:01.243878 | orchestrator | Tuesday 01 April 2025 19:52:52 +0000 (0:00:00.541) 0:00:23.061 ********* 2025-04-01 19:53:01.243890 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:53:01.243903 | orchestrator | 2025-04-01 19:53:01.243915 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-04-01 19:53:01.243928 | orchestrator | Tuesday 01 April 2025 19:52:53 +0000 (0:00:00.628) 0:00:23.690 ********* 2025-04-01 19:53:01.243941 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-04-01 19:53:01.243953 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-04-01 19:53:01.243966 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-04-01 19:53:01.243989 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-04-01 19:53:01.244002 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-04-01 19:53:01.244014 | orchestrator | changed: [testbed-node-0] => 
(item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-04-01 19:53:01.244027 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-04-01 19:53:01.244039 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-04-01 19:53:01.244052 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-04-01 19:53:01.244064 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-04-01 19:53:01.244077 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-04-01 19:53:01.244089 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-04-01 19:53:01.244102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-04-01 19:53:01.244114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-04-01 19:53:01.244126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-04-01 19:53:01.244139 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-04-01 19:53:01.244151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-04-01 19:53:01.244163 | orchestrator | 2025-04-01 19:53:01.244176 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:53:01.244188 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-01 19:53:01.244202 | orchestrator | 2025-04-01 19:53:01.244215 | orchestrator | 2025-04-01 19:53:01.244227 | orchestrator | 2025-04-01 19:53:01.244239 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:53:01.244252 | orchestrator | Tuesday 01 April 2025 19:52:58 +0000 (0:00:05.921) 0:00:29.611 ********* 2025-04-01 19:53:01.244264 | orchestrator | =============================================================================== 2025-04-01 19:53:01.244277 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.92s 2025-04-01 19:53:01.244290 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.02s 2025-04-01 19:53:01.244302 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.76s 2025-04-01 19:53:01.244319 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.26s 2025-04-01 19:53:04.280304 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.21s 2025-04-01 19:53:04.280425 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 1.09s 2025-04-01 19:53:04.280443 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.05s 2025-04-01 19:53:04.280458 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.01s 2025-04-01 19:53:04.280473 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.88s 2025-04-01 19:53:04.280487 | orchestrator | ceph-facts : set_fact _container_exec_cmd ------------------------------- 0.68s 2025-04-01 19:53:04.280501 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does 
not exist --- 0.63s 2025-04-01 19:53:04.280515 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.62s 2025-04-01 19:53:04.280530 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.58s 2025-04-01 19:53:04.280544 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.56s 2025-04-01 19:53:04.280558 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.54s 2025-04-01 19:53:04.280615 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.50s 2025-04-01 19:53:04.280631 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.49s 2025-04-01 19:53:04.280646 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.47s 2025-04-01 19:53:04.280700 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.46s 2025-04-01 19:53:04.280716 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.37s 2025-04-01 19:53:04.280731 | orchestrator | 2025-04-01 19:53:01 | INFO  | Task 2608a822-b2c0-44b4-8983-93b8d9e29ea4 is in state SUCCESS 2025-04-01 19:53:04.280746 | orchestrator | 2025-04-01 19:53:01 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state STARTED 2025-04-01 19:53:04.280760 | orchestrator | 2025-04-01 19:53:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:04.280792 | orchestrator | 2025-04-01 19:53:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:04.283890 | orchestrator | 2025-04-01 19:53:04 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:04.287087 | orchestrator | 2025-04-01 19:53:04 | INFO  | Task 0cbfb97a-0b77-4bcd-a8c0-9272eacfb8c7 is in state SUCCESS 2025-04-01 19:53:04.287408 | orchestrator | 2025-04-01 19:53:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:07.337651 | orchestrator | 2025-04-01 19:53:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:07.338255 | orchestrator | 2025-04-01 19:53:07 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:07.339393 | orchestrator | 2025-04-01 19:53:07 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:10.403163 | orchestrator | 2025-04-01 19:53:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:10.403309 | orchestrator | 2025-04-01 19:53:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:10.404341 | orchestrator | 2025-04-01 19:53:10 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:10.406971 | orchestrator | 2025-04-01 19:53:10 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:13.464895 | orchestrator | 2025-04-01 19:53:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:13.465031 | orchestrator | 2025-04-01 19:53:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:13.469078 | orchestrator | 2025-04-01 19:53:13 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:16.516022 | orchestrator | 2025-04-01 19:53:13 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:16.516157 | orchestrator | 2025-04-01 19:53:13 
| INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:16.516194 | orchestrator | 2025-04-01 19:53:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:16.517165 | orchestrator | 2025-04-01 19:53:16 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:16.518182 | orchestrator | 2025-04-01 19:53:16 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:19.569167 | orchestrator | 2025-04-01 19:53:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:19.569347 | orchestrator | 2025-04-01 19:53:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:19.569852 | orchestrator | 2025-04-01 19:53:19 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:19.571042 | orchestrator | 2025-04-01 19:53:19 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:22.613640 | orchestrator | 2025-04-01 19:53:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:22.613833 | orchestrator | 2025-04-01 19:53:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:22.617482 | orchestrator | 2025-04-01 19:53:22 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:22.618917 | orchestrator | 2025-04-01 19:53:22 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:25.675440 | orchestrator | 2025-04-01 19:53:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:25.675574 | orchestrator | 2025-04-01 19:53:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:25.677532 | orchestrator | 2025-04-01 19:53:25 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:25.678885 | orchestrator | 2025-04-01 19:53:25 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:28.730717 | orchestrator | 2025-04-01 19:53:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:28.730857 | orchestrator | 2025-04-01 19:53:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:28.733038 | orchestrator | 2025-04-01 19:53:28 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:28.734793 | orchestrator | 2025-04-01 19:53:28 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:31.788796 | orchestrator | 2025-04-01 19:53:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:31.788931 | orchestrator | 2025-04-01 19:53:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:31.791227 | orchestrator | 2025-04-01 19:53:31 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:31.794685 | orchestrator | 2025-04-01 19:53:31 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:34.851254 | orchestrator | 2025-04-01 19:53:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:34.851397 | orchestrator | 2025-04-01 19:53:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:34.854530 | orchestrator | 2025-04-01 19:53:34 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state STARTED 2025-04-01 19:53:34.856283 | orchestrator | 2025-04-01 19:53:34 | INFO  | Task 
2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:37.895059 | orchestrator | 2025-04-01 19:53:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:37.895195 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:37.895556 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:37.895583 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:37.895604 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:37.896131 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:37.899970 | orchestrator | 2025-04-01 19:53:37.900082 | orchestrator | 2025-04-01 19:53:37.900099 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-04-01 19:53:37.900142 | orchestrator | 2025-04-01 19:53:37.900157 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-04-01 19:53:37.900172 | orchestrator | Tuesday 01 April 2025 19:52:20 +0000 (0:00:00.149) 0:00:00.149 ********* 2025-04-01 19:53:37.900186 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-01 19:53:37.900202 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900216 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900230 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-01 19:53:37.900244 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900258 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-01 19:53:37.900272 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-01 19:53:37.900286 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-01 19:53:37.900300 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-01 19:53:37.900314 | orchestrator | 2025-04-01 19:53:37.900328 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-04-01 19:53:37.900342 | orchestrator | Tuesday 01 April 2025 19:52:23 +0000 (0:00:03.290) 0:00:03.439 ********* 2025-04-01 19:53:37.900356 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-01 19:53:37.900370 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900384 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900397 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-01 19:53:37.900411 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-01 19:53:37.900425 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-01 19:53:37.900439 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-01 19:53:37.900453 | orchestrator | ok: 
[testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-01 19:53:37.900467 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-01 19:53:37.900481 | orchestrator | 2025-04-01 19:53:37.900495 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-04-01 19:53:37.900509 | orchestrator | Tuesday 01 April 2025 19:52:23 +0000 (0:00:00.293) 0:00:03.733 ********* 2025-04-01 19:53:37.900523 | orchestrator | ok: [testbed-manager] => { 2025-04-01 19:53:37.900540 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-04-01 19:53:37.900557 | orchestrator | } 2025-04-01 19:53:37.900572 | orchestrator | 2025-04-01 19:53:37.900588 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-04-01 19:53:37.900603 | orchestrator | Tuesday 01 April 2025 19:52:23 +0000 (0:00:00.186) 0:00:03.919 ********* 2025-04-01 19:53:37.900618 | orchestrator | changed: [testbed-manager] 2025-04-01 19:53:37.900634 | orchestrator | 2025-04-01 19:53:37.900664 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-04-01 19:53:37.900703 | orchestrator | Tuesday 01 April 2025 19:52:59 +0000 (0:00:35.828) 0:00:39.748 ********* 2025-04-01 19:53:37.900720 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-04-01 19:53:37.900737 | orchestrator | 2025-04-01 19:53:37.900752 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-04-01 19:53:37.900767 | orchestrator | Tuesday 01 April 2025 19:53:00 +0000 (0:00:00.616) 0:00:40.364 ********* 2025-04-01 19:53:37.900794 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-04-01 19:53:37.900811 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-04-01 19:53:37.900827 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-04-01 19:53:37.900843 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-04-01 19:53:37.900859 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-04-01 19:53:37.900886 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-04-01 19:53:37.900903 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 
2025-04-01 19:53:37.900919 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-04-01 19:53:37.900933 | orchestrator | 2025-04-01 19:53:37.900947 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-04-01 19:53:37.900961 | orchestrator | Tuesday 01 April 2025 19:53:03 +0000 (0:00:03.201) 0:00:43.566 ********* 2025-04-01 19:53:37.900976 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:53:37.900990 | orchestrator | 2025-04-01 19:53:37.901004 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:53:37.901019 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:53:37.901034 | orchestrator | 2025-04-01 19:53:37.901049 | orchestrator | Tuesday 01 April 2025 19:53:03 +0000 (0:00:00.030) 0:00:43.596 ********* 2025-04-01 19:53:37.901063 | orchestrator | =============================================================================== 2025-04-01 19:53:37.901077 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 35.83s 2025-04-01 19:53:37.901091 | orchestrator | Check ceph keys --------------------------------------------------------- 3.29s 2025-04-01 19:53:37.901105 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.20s 2025-04-01 19:53:37.901119 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.62s 2025-04-01 19:53:37.901133 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.29s 2025-04-01 19:53:37.901147 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.19s 2025-04-01 19:53:37.901161 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-04-01 19:53:37.901175 | orchestrator | 2025-04-01 19:53:37.901189 | orchestrator | 2025-04-01 19:53:37.901203 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:53:37.901217 | orchestrator | 2025-04-01 19:53:37.901231 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:53:37.901245 | orchestrator | Tuesday 01 April 2025 19:50:56 +0000 (0:00:00.327) 0:00:00.327 ********* 2025-04-01 19:53:37.901259 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.901274 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.901289 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:53:37.901310 | orchestrator | 2025-04-01 19:53:37.901325 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:53:37.901339 | orchestrator | Tuesday 01 April 2025 19:50:56 +0000 (0:00:00.456) 0:00:00.784 ********* 2025-04-01 19:53:37.901353 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-01 19:53:37.901367 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-01 19:53:37.901382 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-01 19:53:37.901396 | orchestrator | 2025-04-01 19:53:37.901410 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-04-01 19:53:37.901424 | orchestrator | 2025-04-01 19:53:37.901438 | 
orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.901452 | orchestrator | Tuesday 01 April 2025 19:50:57 +0000 (0:00:00.328) 0:00:01.113 ********* 2025-04-01 19:53:37.901466 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:53:37.901481 | orchestrator | 2025-04-01 19:53:37.901495 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-04-01 19:53:37.901509 | orchestrator | Tuesday 01 April 2025 19:50:57 +0000 (0:00:00.816) 0:00:01.930 ********* 2025-04-01 19:53:37.901527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.901556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.901573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.901597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.901722 | orchestrator | 2025-04-01 19:53:37.901737 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-01 19:53:37.901751 | orchestrator | Tuesday 01 April 2025 19:51:00 +0000 (0:00:02.325) 0:00:04.255 ********* 2025-04-01 19:53:37.901766 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-01 19:53:37.901780 | orchestrator | 2025-04-01 19:53:37.901794 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-01 19:53:37.901808 | orchestrator | Tuesday 01 April 2025 19:51:00 +0000 (0:00:00.584) 0:00:04.839 ********* 2025-04-01 19:53:37.901823 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.901837 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.901851 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:53:37.901865 | orchestrator | 2025-04-01 19:53:37.901880 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-01 19:53:37.901894 | orchestrator | Tuesday 01 April 2025 19:51:01 +0000 (0:00:00.489) 0:00:05.329 ********* 2025-04-01 19:53:37.901908 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:53:37.901922 | orchestrator | 2025-04-01 19:53:37.901942 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.901956 | orchestrator | Tuesday 01 April 2025 19:51:01 +0000 (0:00:00.409) 0:00:05.738 ********* 2025-04-01 19:53:37.901971 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:53:37.901985 | orchestrator | 2025-04-01 19:53:37.901999 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-04-01 19:53:37.902013 | orchestrator | Tuesday 01 April 2025 19:51:02 +0000 (0:00:00.746) 0:00:06.485 ********* 2025-04-01 19:53:37.902077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.902103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.902128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.902144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.902247 | orchestrator | 2025-04-01 19:53:37.902261 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-01 19:53:37.902276 | orchestrator | Tuesday 01 April 2025 19:51:05 +0000 (0:00:03.543) 0:00:10.029 ********* 2025-04-01 19:53:37.902291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.902307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.902322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.902336 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.902358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.902381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.902397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.902411 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.902426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.902442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.902464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.902485 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.902500 | orchestrator | 2025-04-01 19:53:37.902514 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-04-01 
19:53:37.902528 | orchestrator | Tuesday 01 April 2025 19:51:06 +0000 (0:00:00.799) 0:00:10.829 ********* 2025-04-01 19:53:37.902543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.902558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.902573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.902587 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.902602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.902628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.903618 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task 37dcecd1-e8db-4338-a4bb-b24e6dda4943 is in state SUCCESS 2025-04-01 19:53:37.903792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.903818 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.903837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-01 19:53:37.903856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.903871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-01 19:53:37.903886 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.903901 | orchestrator | 2025-04-01 19:53:37.903916 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-04-01 19:53:37.903961 | orchestrator | Tuesday 01 April 2025 19:51:07 +0000 (0:00:01.119) 0:00:11.948 ********* 2025-04-01 19:53:37.903991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904160 | orchestrator | 2025-04-01 19:53:37.904176 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-04-01 19:53:37.904192 | orchestrator | Tuesday 01 April 2025 19:51:11 +0000 (0:00:03.536) 0:00:15.485 ********* 2025-04-01 19:53:37.904208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904395 | orchestrator | 2025-04-01 19:53:37.904411 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-04-01 19:53:37.904426 | orchestrator | Tuesday 01 April 2025 19:51:18 +0000 (0:00:07.409) 0:00:22.894 ********* 2025-04-01 19:53:37.904440 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.904455 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:53:37.904469 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:53:37.904483 | orchestrator | 2025-04-01 19:53:37.904497 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-04-01 19:53:37.904511 | orchestrator | Tuesday 01 April 2025 19:51:21 +0000 (0:00:02.362) 0:00:25.256 ********* 2025-04-01 19:53:37.904525 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.904539 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.904554 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.904568 | orchestrator | 2025-04-01 19:53:37.904582 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-04-01 19:53:37.904596 | orchestrator | Tuesday 01 April 2025 19:51:22 +0000 (0:00:01.011) 0:00:26.268 ********* 2025-04-01 19:53:37.904610 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.904624 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.904638 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.904651 | orchestrator | 2025-04-01 19:53:37.904687 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-04-01 19:53:37.904703 | orchestrator | Tuesday 01 April 2025 19:51:22 +0000 (0:00:00.569) 0:00:26.837 ********* 2025-04-01 19:53:37.904717 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.904731 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.904752 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.904766 | orchestrator | 2025-04-01 19:53:37.904780 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-04-01 19:53:37.904795 | orchestrator | Tuesday 01 April 2025 19:51:23 +0000 (0:00:00.474) 0:00:27.312 ********* 2025-04-01 19:53:37.904809 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.904904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-01 19:53:37.904919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.904957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
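[editor's note] The per-item results above echo the full service definitions the keystone role loops over: each entry carries the container name, image tag, bind mounts, a Docker healthcheck command, and (for the API container) the HAProxy frontends to publish on port 5000. As a rough illustration of that data shape only — the dict values are copied from the log, but the helper name select_enabled and the summary output are assumptions, not kolla-ansible code — a short Python sketch that walks such a dict and reports which containers would be deployed and how they are health-checked:

# Illustration only: the layout mirrors the items printed in the play output above.
# select_enabled() is a hypothetical helper, not a kolla-ansible API.
keystone_services = {
    "keystone": {
        "container_name": "keystone",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/keystone:25.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
                        "interval": "30", "retries": "3", "timeout": "30", "start_period": "5"},
    },
    "keystone-ssh": {
        "container_name": "keystone_ssh",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8023"],
                        "interval": "30", "retries": "3", "timeout": "30", "start_period": "5"},
    },
}

def select_enabled(services: dict) -> list[tuple[str, str, str]]:
    """Return (container_name, image, healthcheck command) for every enabled service."""
    selected = []
    for name, svc in services.items():
        if not svc.get("enabled", False):
            continue  # a service with enabled: False would simply not be acted on
        test = svc.get("healthcheck", {}).get("test", [])
        selected.append((svc["container_name"], svc["image"], test[-1] if test else ""))
    return selected

for container, image, check in select_enabled(keystone_services):
    print(f"{container:<14} {image.rsplit('/', 1)[-1]:<40} healthcheck: {check}")

Skips in the log (for example keystone-ssh on the policy-file task) come from additional task conditions, not from the enabled flag; the sketch only shows the general shape of the per-service loop.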
2025-04-01 19:53:37.904971 | orchestrator | 2025-04-01 19:53:37.904985 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.905000 | orchestrator | Tuesday 01 April 2025 19:51:26 +0000 (0:00:02.776) 0:00:30.088 ********* 2025-04-01 19:53:37.905013 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.905048 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.905062 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.905077 | orchestrator | 2025-04-01 19:53:37.905091 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-04-01 19:53:37.905105 | orchestrator | Tuesday 01 April 2025 19:51:26 +0000 (0:00:00.344) 0:00:30.433 ********* 2025-04-01 19:53:37.905119 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-01 19:53:37.905143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-01 19:53:37.905158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-01 19:53:37.905172 | orchestrator | 2025-04-01 19:53:37.905186 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-01 19:53:37.905201 | orchestrator | Tuesday 01 April 2025 19:51:28 +0000 (0:00:01.954) 0:00:32.387 ********* 2025-04-01 19:53:37.905215 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:53:37.905229 | orchestrator | 2025-04-01 19:53:37.905243 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-04-01 19:53:37.905257 | orchestrator | Tuesday 01 April 2025 19:51:29 +0000 (0:00:00.815) 0:00:33.203 ********* 2025-04-01 19:53:37.905271 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.905285 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.905299 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.905313 | orchestrator | 2025-04-01 19:53:37.905327 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-04-01 19:53:37.905341 | orchestrator | Tuesday 01 April 2025 19:51:30 +0000 (0:00:01.002) 0:00:34.205 ********* 2025-04-01 19:53:37.905355 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:53:37.905369 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-01 19:53:37.905383 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-01 19:53:37.905397 | orchestrator | 2025-04-01 19:53:37.905417 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-04-01 19:53:37.905432 | orchestrator | Tuesday 01 April 2025 19:51:31 +0000 (0:00:01.430) 0:00:35.636 ********* 2025-04-01 19:53:37.905446 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.905462 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.905476 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:53:37.905489 | orchestrator | 2025-04-01 19:53:37.905504 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-04-01 19:53:37.905518 | orchestrator | Tuesday 01 April 2025 19:51:31 +0000 (0:00:00.392) 0:00:36.028 ********* 2025-04-01 19:53:37.905532 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-01 19:53:37.905546 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-01 19:53:37.905560 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-01 19:53:37.905574 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-01 19:53:37.905588 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-01 19:53:37.905602 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-01 19:53:37.905616 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-01 19:53:37.905630 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-01 19:53:37.905644 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-01 19:53:37.905658 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-01 19:53:37.905697 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-01 19:53:37.905719 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-01 19:53:37.905739 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-01 19:53:37.905753 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-01 19:53:37.905777 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-01 19:53:37.905791 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 19:53:37.905805 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 19:53:37.905819 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 19:53:37.905833 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 19:53:37.905847 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 19:53:37.905861 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 19:53:37.905875 | orchestrator | 2025-04-01 19:53:37.905889 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-04-01 19:53:37.905904 | orchestrator | Tuesday 01 April 2025 19:51:44 +0000 (0:00:12.214) 0:00:48.242 ********* 2025-04-01 19:53:37.905918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 19:53:37.905932 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 19:53:37.905946 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 19:53:37.905960 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 19:53:37.905974 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 19:53:37.905988 | 
orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 19:53:37.906002 | orchestrator | 2025-04-01 19:53:37.906061 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-04-01 19:53:37.906080 | orchestrator | Tuesday 01 April 2025 19:51:47 +0000 (0:00:03.560) 0:00:51.802 ********* 2025-04-01 19:53:37.906095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.906111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.906145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-01 19:53:37.906162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-01 19:53:37.906264 | orchestrator | 2025-04-01 19:53:37.906278 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.906292 | orchestrator | Tuesday 01 April 2025 19:51:51 +0000 (0:00:03.310) 0:00:55.113 ********* 2025-04-01 19:53:37.906306 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.906320 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.906334 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.906348 | orchestrator | 2025-04-01 19:53:37.906363 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-04-01 19:53:37.906377 | orchestrator | Tuesday 01 April 2025 19:51:51 +0000 (0:00:00.306) 0:00:55.420 ********* 2025-04-01 19:53:37.906391 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.906405 | orchestrator | 2025-04-01 19:53:37.906419 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-04-01 19:53:37.906433 | orchestrator | Tuesday 01 April 2025 19:51:54 +0000 (0:00:03.273) 0:00:58.694 ********* 2025-04-01 19:53:37.906447 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.906461 | orchestrator | 2025-04-01 19:53:37.906475 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-04-01 19:53:37.906489 | orchestrator | Tuesday 01 April 2025 19:51:57 +0000 (0:00:02.557) 0:01:01.251 ********* 2025-04-01 19:53:37.906503 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.906517 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:53:37.906531 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.906545 | orchestrator | 2025-04-01 19:53:37.906559 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-04-01 19:53:37.906573 | orchestrator | Tuesday 01 April 2025 19:51:58 +0000 (0:00:01.003) 0:01:02.255 ********* 2025-04-01 19:53:37.906587 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.906601 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.906615 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:53:37.906629 | orchestrator | 2025-04-01 19:53:37.906643 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-04-01 19:53:37.906663 | orchestrator | Tuesday 01 April 2025 19:51:58 +0000 (0:00:00.354) 0:01:02.610 ********* 2025-04-01 19:53:37.906735 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.906750 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.906765 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.906779 | orchestrator | 2025-04-01 19:53:37.906793 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-04-01 19:53:37.906808 | orchestrator | Tuesday 01 April 2025 19:51:59 +0000 (0:00:00.859) 0:01:03.469 ********* 2025-04-01 19:53:37.906822 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.906836 | orchestrator | 2025-04-01 
19:53:37.906850 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-04-01 19:53:37.906864 | orchestrator | Tuesday 01 April 2025 19:52:10 +0000 (0:00:11.193) 0:01:14.663 ********* 2025-04-01 19:53:37.906879 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.906900 | orchestrator | 2025-04-01 19:53:37.906915 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-01 19:53:37.906929 | orchestrator | Tuesday 01 April 2025 19:52:18 +0000 (0:00:07.829) 0:01:22.492 ********* 2025-04-01 19:53:37.906943 | orchestrator | 2025-04-01 19:53:37.906957 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-01 19:53:37.906971 | orchestrator | Tuesday 01 April 2025 19:52:18 +0000 (0:00:00.069) 0:01:22.562 ********* 2025-04-01 19:53:37.906985 | orchestrator | 2025-04-01 19:53:37.906999 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-01 19:53:37.907013 | orchestrator | Tuesday 01 April 2025 19:52:18 +0000 (0:00:00.058) 0:01:22.621 ********* 2025-04-01 19:53:37.907028 | orchestrator | 2025-04-01 19:53:37.907042 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-04-01 19:53:37.907055 | orchestrator | Tuesday 01 April 2025 19:52:18 +0000 (0:00:00.057) 0:01:22.678 ********* 2025-04-01 19:53:37.907069 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.907083 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:53:37.907097 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:53:37.907111 | orchestrator | 2025-04-01 19:53:37.907125 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-04-01 19:53:37.907140 | orchestrator | Tuesday 01 April 2025 19:52:29 +0000 (0:00:11.236) 0:01:33.915 ********* 2025-04-01 19:53:37.907154 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.907168 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:53:37.907182 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:53:37.907196 | orchestrator | 2025-04-01 19:53:37.907210 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-04-01 19:53:37.907224 | orchestrator | Tuesday 01 April 2025 19:52:39 +0000 (0:00:09.401) 0:01:43.316 ********* 2025-04-01 19:53:37.907238 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:53:37.907251 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.907266 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:53:37.907280 | orchestrator | 2025-04-01 19:53:37.907293 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.907308 | orchestrator | Tuesday 01 April 2025 19:52:48 +0000 (0:00:09.610) 0:01:52.927 ********* 2025-04-01 19:53:37.907322 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:53:37.907336 | orchestrator | 2025-04-01 19:53:37.907350 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-04-01 19:53:37.907370 | orchestrator | Tuesday 01 April 2025 19:52:49 +0000 (0:00:00.946) 0:01:53.873 ********* 2025-04-01 19:53:37.907385 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:53:37.907399 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.907413 | orchestrator | ok: 
[testbed-node-2] 2025-04-01 19:53:37.907427 | orchestrator | 2025-04-01 19:53:37.907441 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-04-01 19:53:37.907455 | orchestrator | Tuesday 01 April 2025 19:52:51 +0000 (0:00:01.260) 0:01:55.134 ********* 2025-04-01 19:53:37.907476 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:53:37.907491 | orchestrator | 2025-04-01 19:53:37.907505 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-04-01 19:53:37.907519 | orchestrator | Tuesday 01 April 2025 19:52:52 +0000 (0:00:01.574) 0:01:56.709 ********* 2025-04-01 19:53:37.907533 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-04-01 19:53:37.907547 | orchestrator | 2025-04-01 19:53:37.907561 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-04-01 19:53:37.907575 | orchestrator | Tuesday 01 April 2025 19:53:02 +0000 (0:00:09.352) 0:02:06.061 ********* 2025-04-01 19:53:37.907590 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-04-01 19:53:37.907604 | orchestrator | 2025-04-01 19:53:37.907618 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-04-01 19:53:37.907632 | orchestrator | Tuesday 01 April 2025 19:53:22 +0000 (0:00:20.749) 0:02:26.811 ********* 2025-04-01 19:53:37.907653 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-04-01 19:53:37.907685 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-01 19:53:37.907700 | orchestrator | 2025-04-01 19:53:37.907714 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-01 19:53:37.907728 | orchestrator | Tuesday 01 April 2025 19:53:30 +0000 (0:00:07.425) 0:02:34.236 ********* 2025-04-01 19:53:37.907742 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.907757 | orchestrator | 2025-04-01 19:53:37.907770 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-01 19:53:37.907793 | orchestrator | Tuesday 01 April 2025 19:53:30 +0000 (0:00:00.132) 0:02:34.369 ********* 2025-04-01 19:53:37.907808 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.907822 | orchestrator | 2025-04-01 19:53:37.907836 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-01 19:53:37.907850 | orchestrator | Tuesday 01 April 2025 19:53:30 +0000 (0:00:00.149) 0:02:34.519 ********* 2025-04-01 19:53:37.907864 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.907878 | orchestrator | 2025-04-01 19:53:37.907892 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-01 19:53:37.907906 | orchestrator | Tuesday 01 April 2025 19:53:30 +0000 (0:00:00.121) 0:02:34.640 ********* 2025-04-01 19:53:37.907920 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.907937 | orchestrator | 2025-04-01 19:53:37.907952 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-01 19:53:37.907966 | orchestrator | Tuesday 01 April 2025 19:53:31 +0000 (0:00:00.506) 0:02:35.147 ********* 2025-04-01 19:53:37.907981 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:53:37.907996 | orchestrator | 2025-04-01 19:53:37.908010 | 
orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-01 19:53:37.908024 | orchestrator | Tuesday 01 April 2025 19:53:34 +0000 (0:00:03.227) 0:02:38.375 ********* 2025-04-01 19:53:37.908037 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:53:37.908052 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:53:37.908066 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:53:37.908080 | orchestrator | 2025-04-01 19:53:37.908094 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:53:37.908108 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-01 19:53:37.908124 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-01 19:53:37.908138 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-01 19:53:37.908152 | orchestrator | 2025-04-01 19:53:37.908166 | orchestrator | 2025-04-01 19:53:37.908180 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:53:37.908195 | orchestrator | Tuesday 01 April 2025 19:53:34 +0000 (0:00:00.625) 0:02:39.000 ********* 2025-04-01 19:53:37.908208 | orchestrator | =============================================================================== 2025-04-01 19:53:37.908222 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.75s 2025-04-01 19:53:37.908236 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 12.21s 2025-04-01 19:53:37.908250 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 11.24s 2025-04-01 19:53:37.908264 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 11.19s 2025-04-01 19:53:37.908278 | orchestrator | keystone : Restart keystone container ----------------------------------- 9.61s 2025-04-01 19:53:37.908292 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.40s 2025-04-01 19:53:37.908313 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.35s 2025-04-01 19:53:37.908327 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 7.83s 2025-04-01 19:53:37.908341 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.43s 2025-04-01 19:53:37.908355 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.41s 2025-04-01 19:53:37.908374 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.56s 2025-04-01 19:53:40.931817 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2025-04-01 19:53:40.931948 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.54s 2025-04-01 19:53:40.931968 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.31s 2025-04-01 19:53:40.931983 | orchestrator | keystone : Creating keystone database ----------------------------------- 3.27s 2025-04-01 19:53:40.931997 | orchestrator | keystone : Creating default user role ----------------------------------- 3.23s 2025-04-01 19:53:40.932011 | orchestrator | keystone : Copying over existing policy file 
---------------------------- 2.78s 2025-04-01 19:53:40.932026 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s 2025-04-01 19:53:40.932040 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.36s 2025-04-01 19:53:40.932072 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.33s 2025-04-01 19:53:40.932087 | orchestrator | 2025-04-01 19:53:37 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:40.932103 | orchestrator | 2025-04-01 19:53:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:40.932135 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:40.934961 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:40.936267 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:40.937182 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:40.938221 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:40.939345 | orchestrator | 2025-04-01 19:53:40 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:43.997058 | orchestrator | 2025-04-01 19:53:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:43.997232 | orchestrator | 2025-04-01 19:53:43 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:44.003268 | orchestrator | 2025-04-01 19:53:44 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:44.006171 | orchestrator | 2025-04-01 19:53:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:44.009004 | orchestrator | 2025-04-01 19:53:44 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:44.013028 | orchestrator | 2025-04-01 19:53:44 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:44.014097 | orchestrator | 2025-04-01 19:53:44 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:44.014283 | orchestrator | 2025-04-01 19:53:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:47.056414 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:47.059512 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:47.060335 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:47.060366 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:47.062164 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:47.063508 | orchestrator | 2025-04-01 19:53:47 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:47.066531 | orchestrator | 2025-04-01 19:53:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:50.117131 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af 
is in state STARTED 2025-04-01 19:53:50.118915 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:50.122179 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:50.123387 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:50.124595 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:50.125736 | orchestrator | 2025-04-01 19:53:50 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:53.166736 | orchestrator | 2025-04-01 19:53:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:53.166904 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:53.169037 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:53.170939 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:53.172200 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:53.173758 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:53.175802 | orchestrator | 2025-04-01 19:53:53 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:53.175914 | orchestrator | 2025-04-01 19:53:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:56.229859 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:56.231356 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:56.233108 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:56.236988 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:56.238850 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:56.241275 | orchestrator | 2025-04-01 19:53:56 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:59.305603 | orchestrator | 2025-04-01 19:53:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:53:59.305810 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:53:59.307530 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:53:59.310340 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:53:59.312326 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:53:59.315136 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:53:59.316769 | orchestrator | 2025-04-01 19:53:59 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:53:59.317283 | orchestrator | 2025-04-01 19:53:59 | INFO 
 | Wait 1 second(s) until the next check 2025-04-01 19:54:02.360017 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:02.360978 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:02.364115 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:02.365748 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:02.367633 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:02.371491 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task 62c81a89-bc82-4e6c-928e-2ea896a20270 is in state STARTED 2025-04-01 19:54:02.378073 | orchestrator | 2025-04-01 19:54:02 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state STARTED 2025-04-01 19:54:05.447103 | orchestrator | 2025-04-01 19:54:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:05.447280 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:05.451405 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:05.453929 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:05.457006 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:05.460143 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:05.461978 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task 62c81a89-bc82-4e6c-928e-2ea896a20270 is in state STARTED 2025-04-01 19:54:05.464486 | orchestrator | 2025-04-01 19:54:05 | INFO  | Task 2e902718-8809-49d2-9f2f-fd78d1a9fad7 is in state SUCCESS 2025-04-01 19:54:08.532039 | orchestrator | 2025-04-01 19:54:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:08.532198 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:08.534772 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:08.535861 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:08.538108 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:08.539628 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:08.541155 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:08.545092 | orchestrator | 2025-04-01 19:54:08 | INFO  | Task 62c81a89-bc82-4e6c-928e-2ea896a20270 is in state STARTED 2025-04-01 19:54:11.595978 | orchestrator | 2025-04-01 19:54:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:11.596123 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:11.597356 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:11.599497 | orchestrator | 
2025-04-01 19:54:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:11.601872 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:11.602952 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:11.603958 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:11.606435 | orchestrator | 2025-04-01 19:54:11 | INFO  | Task 62c81a89-bc82-4e6c-928e-2ea896a20270 is in state STARTED 2025-04-01 19:54:14.692738 | orchestrator | 2025-04-01 19:54:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:14.692859 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:14.694174 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:14.696239 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:14.697716 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:14.699504 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:14.700846 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:14.701993 | orchestrator | 2025-04-01 19:54:14 | INFO  | Task 62c81a89-bc82-4e6c-928e-2ea896a20270 is in state SUCCESS 2025-04-01 19:54:14.702399 | orchestrator | 2025-04-01 19:54:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:17.750545 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:17.754509 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:17.756969 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:17.758065 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:17.761080 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:17.761554 | orchestrator | 2025-04-01 19:54:17 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:17.761587 | orchestrator | 2025-04-01 19:54:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:20.806111 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:20.808223 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:20.809511 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:20.813197 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:20.814113 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:20.814150 | orchestrator | 2025-04-01 19:54:20 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state 
STARTED 2025-04-01 19:54:23.855146 | orchestrator | 2025-04-01 19:54:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:23.855273 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:23.857003 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:23.859471 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:23.859595 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:23.860961 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:23.863258 | orchestrator | 2025-04-01 19:54:23 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:26.908625 | orchestrator | 2025-04-01 19:54:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:26.908794 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:26.911903 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:26.912716 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:26.913038 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:26.914838 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:26.915782 | orchestrator | 2025-04-01 19:54:26 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:29.970288 | orchestrator | 2025-04-01 19:54:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:29.970456 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:29.971173 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:29.971352 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:29.971673 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:29.972587 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:29.972964 | orchestrator | 2025-04-01 19:54:29 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:33.049018 | orchestrator | 2025-04-01 19:54:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:33.049185 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:36.098094 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:36.098241 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:36.098256 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:36.098299 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task 
8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:36.098310 | orchestrator | 2025-04-01 19:54:33 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:36.098322 | orchestrator | 2025-04-01 19:54:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:36.098350 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:36.098554 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:36.098578 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:36.101142 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:36.103606 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:36.103846 | orchestrator | 2025-04-01 19:54:36 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:36.103867 | orchestrator | 2025-04-01 19:54:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:39.131943 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:39.132911 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:39.134000 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:39.134766 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:39.135790 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:39.136624 | orchestrator | 2025-04-01 19:54:39 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:39.137022 | orchestrator | 2025-04-01 19:54:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:42.202439 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:42.202838 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:42.203834 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:42.204611 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:42.208571 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state STARTED 2025-04-01 19:54:45.251376 | orchestrator | 2025-04-01 19:54:42 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:45.251502 | orchestrator | 2025-04-01 19:54:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:45.251540 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:45.252664 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:45.260992 | orchestrator | 2025-04-01 19:54:45.261027 | orchestrator | 2025-04-01 19:54:45.261041 | orchestrator | PLAY [Apply role cephclient] 
*************************************************** 2025-04-01 19:54:45.261056 | orchestrator | 2025-04-01 19:54:45.261070 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-04-01 19:54:45.261115 | orchestrator | Tuesday 01 April 2025 19:53:07 +0000 (0:00:00.169) 0:00:00.169 ********* 2025-04-01 19:54:45.261130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-04-01 19:54:45.261162 | orchestrator | 2025-04-01 19:54:45.261177 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-04-01 19:54:45.261192 | orchestrator | Tuesday 01 April 2025 19:53:07 +0000 (0:00:00.234) 0:00:00.404 ********* 2025-04-01 19:54:45.261207 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-04-01 19:54:45.261222 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-04-01 19:54:45.261237 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-04-01 19:54:45.261252 | orchestrator | 2025-04-01 19:54:45.261266 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-04-01 19:54:45.261281 | orchestrator | Tuesday 01 April 2025 19:53:08 +0000 (0:00:01.251) 0:00:01.655 ********* 2025-04-01 19:54:45.261296 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-04-01 19:54:45.261311 | orchestrator | 2025-04-01 19:54:45.261325 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-04-01 19:54:45.261340 | orchestrator | Tuesday 01 April 2025 19:53:10 +0000 (0:00:01.258) 0:00:02.914 ********* 2025-04-01 19:54:45.261354 | orchestrator | changed: [testbed-manager] 2025-04-01 19:54:45.261378 | orchestrator | 2025-04-01 19:54:45.261393 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-04-01 19:54:45.261408 | orchestrator | Tuesday 01 April 2025 19:53:11 +0000 (0:00:01.023) 0:00:03.938 ********* 2025-04-01 19:54:45.261422 | orchestrator | changed: [testbed-manager] 2025-04-01 19:54:45.261443 | orchestrator | 2025-04-01 19:54:45.261457 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-04-01 19:54:45.261472 | orchestrator | Tuesday 01 April 2025 19:53:12 +0000 (0:00:01.159) 0:00:05.097 ********* 2025-04-01 19:54:45.261486 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
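[editor's note] The "FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left)." line above is Ansible's bounded-retry pattern: the task re-runs its check until it passes or the retry budget is exhausted, and the same poll-and-wait shape appears in the osism console output earlier ("Task ... is in state STARTED" / "Wait 1 second(s) until the next check"). A minimal Python sketch of that pattern, assuming a hypothetical check() probe standing in for "is the cephclient container up yet?"; this is not the Ansible or osism implementation:

import time

def wait_until(check, retries: int = 10, delay: float = 5.0) -> bool:
    """Probe `check` until it succeeds, retrying up to `retries` times.

    A sketch of the bounded retry/poll behaviour visible in the log
    ("FAILED - RETRYING ... (N retries left)"), not the real code.
    """
    for retries_left in range(retries, -1, -1):
        if check():
            return True
        if retries_left == 0:
            return False
        print(f"FAILED - RETRYING ({retries_left} retries left)")
        time.sleep(delay)
    return False

# Example: a probe that starts succeeding on the third attempt.
attempts = iter([False, False, True])
print(wait_until(lambda: next(attempts), retries=10, delay=0.1))  # -> True after two retries

In the play above the first probe failed while the cephclient compose service was still starting, one retry later it reported ok, and the handlers afterwards wait again for a healthy service before copying the bash completion scripts.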
2025-04-01 19:54:45.261501 | orchestrator | ok: [testbed-manager] 2025-04-01 19:54:45.261516 | orchestrator | 2025-04-01 19:54:45.261530 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-04-01 19:54:45.261545 | orchestrator | Tuesday 01 April 2025 19:53:54 +0000 (0:00:42.233) 0:00:47.331 ********* 2025-04-01 19:54:45.261559 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-04-01 19:54:45.261574 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-04-01 19:54:45.261589 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-04-01 19:54:45.261606 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-04-01 19:54:45.261622 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-04-01 19:54:45.261637 | orchestrator | 2025-04-01 19:54:45.261653 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-04-01 19:54:45.261670 | orchestrator | Tuesday 01 April 2025 19:53:58 +0000 (0:00:04.307) 0:00:51.638 ********* 2025-04-01 19:54:45.261712 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-04-01 19:54:45.261729 | orchestrator | 2025-04-01 19:54:45.261745 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-04-01 19:54:45.261760 | orchestrator | Tuesday 01 April 2025 19:53:59 +0000 (0:00:00.576) 0:00:52.215 ********* 2025-04-01 19:54:45.261776 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:54:45.261797 | orchestrator | 2025-04-01 19:54:45.261813 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-04-01 19:54:45.261829 | orchestrator | Tuesday 01 April 2025 19:53:59 +0000 (0:00:00.134) 0:00:52.350 ********* 2025-04-01 19:54:45.261844 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:54:45.261861 | orchestrator | 2025-04-01 19:54:45.261877 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-04-01 19:54:45.261901 | orchestrator | Tuesday 01 April 2025 19:53:59 +0000 (0:00:00.297) 0:00:52.647 ********* 2025-04-01 19:54:45.261918 | orchestrator | changed: [testbed-manager] 2025-04-01 19:54:45.261934 | orchestrator | 2025-04-01 19:54:45.261950 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-04-01 19:54:45.261965 | orchestrator | Tuesday 01 April 2025 19:54:01 +0000 (0:00:01.617) 0:00:54.264 ********* 2025-04-01 19:54:45.261979 | orchestrator | changed: [testbed-manager] 2025-04-01 19:54:45.261994 | orchestrator | 2025-04-01 19:54:45.262008 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-04-01 19:54:45.262072 | orchestrator | Tuesday 01 April 2025 19:54:02 +0000 (0:00:01.045) 0:00:55.310 ********* 2025-04-01 19:54:45.262088 | orchestrator | changed: [testbed-manager] 2025-04-01 19:54:45.262103 | orchestrator | 2025-04-01 19:54:45.262117 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-04-01 19:54:45.262132 | orchestrator | Tuesday 01 April 2025 19:54:03 +0000 (0:00:00.587) 0:00:55.897 ********* 2025-04-01 19:54:45.262146 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-04-01 19:54:45.262161 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-04-01 19:54:45.262176 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-04-01 19:54:45.262192 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-04-01 19:54:45.262206 | orchestrator | 2025-04-01 19:54:45.262221 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:54:45.262236 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-01 19:54:45.262252 | orchestrator | 2025-04-01 19:54:45.262285 | orchestrator | Tuesday 01 April 2025 19:54:04 +0000 (0:00:01.532) 0:00:57.429 ********* 2025-04-01 19:54:48.318366 | orchestrator | =============================================================================== 2025-04-01 19:54:48.318488 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.23s 2025-04-01 19:54:48.318507 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.31s 2025-04-01 19:54:48.318523 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.62s 2025-04-01 19:54:48.318537 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2025-04-01 19:54:48.318551 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.26s 2025-04-01 19:54:48.318566 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2025-04-01 19:54:48.318580 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.16s 2025-04-01 19:54:48.318594 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.05s 2025-04-01 19:54:48.318608 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2025-04-01 19:54:48.318622 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2025-04-01 19:54:48.318636 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.58s 2025-04-01 19:54:48.318651 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-04-01 19:54:48.318665 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-04-01 19:54:48.318725 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-04-01 19:54:48.318742 | orchestrator | 2025-04-01 19:54:48.318758 | orchestrator | None 2025-04-01 19:54:48.318884 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:48.318900 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:48.318915 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task 8466a903-5a1d-41a6-ab8f-2583acebe28c is in state SUCCESS 2025-04-01 19:54:48.318929 | orchestrator | 2025-04-01 19:54:45 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:48.318968 | orchestrator | 2025-04-01 19:54:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:48.319001 | orchestrator | 2025-04-01 19:54:48 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:48.319558 | orchestrator | 2025-04-01 19:54:48 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:48.319591 | orchestrator | 2025-04-01 19:54:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:48.320425 | orchestrator 
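
The wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) and bash completion files copied at the end of this play let the manager host run the Ceph CLIs without installing Ceph packages locally. A minimal sketch of what such a wrapper could look like, assuming the container is called cephclient and the wrapper is installed as /usr/local/bin/ceph (both assumptions; the play does not show the script contents):

    #!/usr/bin/env bash
    # Hypothetical wrapper sketch: forward the call into the cephclient container.
    exec docker exec -i cephclient ceph "$@"

The same pattern would repeat for rados, rbd, radosgw-admin and ceph-authtool, each exec-ing the matching binary inside the container.
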
| 2025-04-01 19:54:48 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:48.321206 | orchestrator | 2025-04-01 19:54:48 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:51.358407 | orchestrator | 2025-04-01 19:54:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:51.358545 | orchestrator | 2025-04-01 19:54:51 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:51.358836 | orchestrator | 2025-04-01 19:54:51 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:51.359349 | orchestrator | 2025-04-01 19:54:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:51.360172 | orchestrator | 2025-04-01 19:54:51 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:51.360915 | orchestrator | 2025-04-01 19:54:51 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:54.419174 | orchestrator | 2025-04-01 19:54:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:54.419329 | orchestrator | 2025-04-01 19:54:54 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:54.420291 | orchestrator | 2025-04-01 19:54:54 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:54.421110 | orchestrator | 2025-04-01 19:54:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:54.421140 | orchestrator | 2025-04-01 19:54:54 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:54.421924 | orchestrator | 2025-04-01 19:54:54 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:54:57.471542 | orchestrator | 2025-04-01 19:54:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:54:57.471733 | orchestrator | 2025-04-01 19:54:57 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:54:57.472846 | orchestrator | 2025-04-01 19:54:57 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:54:57.473395 | orchestrator | 2025-04-01 19:54:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:54:57.474205 | orchestrator | 2025-04-01 19:54:57 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:54:57.476119 | orchestrator | 2025-04-01 19:54:57 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:00.529338 | orchestrator | 2025-04-01 19:54:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:00.529469 | orchestrator | 2025-04-01 19:55:00 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:00.530374 | orchestrator | 2025-04-01 19:55:00 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:00.532214 | orchestrator | 2025-04-01 19:55:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:00.533015 | orchestrator | 2025-04-01 19:55:00 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:00.533817 | orchestrator | 2025-04-01 19:55:00 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:03.593324 | orchestrator | 2025-04-01 19:55:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:03.593505 | orchestrator | 
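
The long runs of "is in state STARTED ... Wait 1 second(s) until the next check" are the deploy wrapper polling the OSISM task queue until every task it enqueued reports SUCCESS. The pattern is a plain poll loop; a rough sketch, where task_state is a hypothetical stand-in for the real status lookup (not an actual osism CLI call):

    # Hedged sketch of the poll-until-done pattern visible in this log.
    task_state() { echo SUCCESS; }          # placeholder; substitute the real lookup
    tasks=(ee339fd5 d5a76ddc aa2524f4 849c6dac 8172e6db)
    while :; do
      pending=0
      for id in "${tasks[@]}"; do
        state=$(task_state "$id")
        echo "Task $id is in state $state"
        [ "$state" = SUCCESS ] || pending=1
      done
      [ "$pending" -eq 0 ] && break
      echo "Wait 1 second(s) until the next check"
      sleep 1
    done
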
2025-04-01 19:55:03 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:03.596599 | orchestrator | 2025-04-01 19:55:03 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:03.596999 | orchestrator | 2025-04-01 19:55:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:03.597747 | orchestrator | 2025-04-01 19:55:03 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:03.598387 | orchestrator | 2025-04-01 19:55:03 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:06.646579 | orchestrator | 2025-04-01 19:55:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:06.646797 | orchestrator | 2025-04-01 19:55:06 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:06.647491 | orchestrator | 2025-04-01 19:55:06 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:06.647525 | orchestrator | 2025-04-01 19:55:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:06.649771 | orchestrator | 2025-04-01 19:55:06 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:06.650449 | orchestrator | 2025-04-01 19:55:06 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:09.716096 | orchestrator | 2025-04-01 19:55:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:09.716268 | orchestrator | 2025-04-01 19:55:09 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:09.717430 | orchestrator | 2025-04-01 19:55:09 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:09.719367 | orchestrator | 2025-04-01 19:55:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:09.721245 | orchestrator | 2025-04-01 19:55:09 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:09.721937 | orchestrator | 2025-04-01 19:55:09 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:12.774321 | orchestrator | 2025-04-01 19:55:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:12.774499 | orchestrator | 2025-04-01 19:55:12 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:12.778589 | orchestrator | 2025-04-01 19:55:12 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:12.778629 | orchestrator | 2025-04-01 19:55:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:12.779871 | orchestrator | 2025-04-01 19:55:12 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:12.781468 | orchestrator | 2025-04-01 19:55:12 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:15.824908 | orchestrator | 2025-04-01 19:55:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:15.825049 | orchestrator | 2025-04-01 19:55:15 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:15.825523 | orchestrator | 2025-04-01 19:55:15 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:15.826121 | orchestrator | 2025-04-01 19:55:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 
19:55:15.826890 | orchestrator | 2025-04-01 19:55:15 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:15.827945 | orchestrator | 2025-04-01 19:55:15 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:18.872265 | orchestrator | 2025-04-01 19:55:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:18.872427 | orchestrator | 2025-04-01 19:55:18 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:18.872901 | orchestrator | 2025-04-01 19:55:18 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:18.873781 | orchestrator | 2025-04-01 19:55:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:18.876793 | orchestrator | 2025-04-01 19:55:18 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:18.878920 | orchestrator | 2025-04-01 19:55:18 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:21.915558 | orchestrator | 2025-04-01 19:55:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:21.915759 | orchestrator | 2025-04-01 19:55:21 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:21.916536 | orchestrator | 2025-04-01 19:55:21 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:21.916571 | orchestrator | 2025-04-01 19:55:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:21.917387 | orchestrator | 2025-04-01 19:55:21 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:21.918425 | orchestrator | 2025-04-01 19:55:21 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:24.959318 | orchestrator | 2025-04-01 19:55:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:24.959454 | orchestrator | 2025-04-01 19:55:24 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:24.959850 | orchestrator | 2025-04-01 19:55:24 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:24.960651 | orchestrator | 2025-04-01 19:55:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:24.961016 | orchestrator | 2025-04-01 19:55:24 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:24.962058 | orchestrator | 2025-04-01 19:55:24 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:28.018638 | orchestrator | 2025-04-01 19:55:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:28.018974 | orchestrator | 2025-04-01 19:55:28 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:28.021194 | orchestrator | 2025-04-01 19:55:28 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:28.021249 | orchestrator | 2025-04-01 19:55:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:28.021844 | orchestrator | 2025-04-01 19:55:28 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:28.024041 | orchestrator | 2025-04-01 19:55:28 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:31.099165 | orchestrator | 2025-04-01 19:55:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 
19:55:31.099326 | orchestrator | 2025-04-01 19:55:31 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:31.100429 | orchestrator | 2025-04-01 19:55:31 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:31.101127 | orchestrator | 2025-04-01 19:55:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:31.102089 | orchestrator | 2025-04-01 19:55:31 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:31.103518 | orchestrator | 2025-04-01 19:55:31 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:34.146924 | orchestrator | 2025-04-01 19:55:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:34.147094 | orchestrator | 2025-04-01 19:55:34 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:34.148270 | orchestrator | 2025-04-01 19:55:34 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:34.149378 | orchestrator | 2025-04-01 19:55:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:34.153340 | orchestrator | 2025-04-01 19:55:34 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:34.160241 | orchestrator | 2025-04-01 19:55:34 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:37.204197 | orchestrator | 2025-04-01 19:55:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:37.204358 | orchestrator | 2025-04-01 19:55:37 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:37.205310 | orchestrator | 2025-04-01 19:55:37 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:37.206338 | orchestrator | 2025-04-01 19:55:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:37.208063 | orchestrator | 2025-04-01 19:55:37 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:37.209107 | orchestrator | 2025-04-01 19:55:37 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:40.244485 | orchestrator | 2025-04-01 19:55:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:40.244624 | orchestrator | 2025-04-01 19:55:40 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:40.251952 | orchestrator | 2025-04-01 19:55:40 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:40.252944 | orchestrator | 2025-04-01 19:55:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:40.252976 | orchestrator | 2025-04-01 19:55:40 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:40.253800 | orchestrator | 2025-04-01 19:55:40 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:43.294932 | orchestrator | 2025-04-01 19:55:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:43.295080 | orchestrator | 2025-04-01 19:55:43 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:43.295305 | orchestrator | 2025-04-01 19:55:43 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:43.296173 | orchestrator | 2025-04-01 19:55:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in 
state STARTED 2025-04-01 19:55:43.297739 | orchestrator | 2025-04-01 19:55:43 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:43.298753 | orchestrator | 2025-04-01 19:55:43 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:46.357682 | orchestrator | 2025-04-01 19:55:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:46.357863 | orchestrator | 2025-04-01 19:55:46 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:46.359520 | orchestrator | 2025-04-01 19:55:46 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:46.361798 | orchestrator | 2025-04-01 19:55:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:46.363039 | orchestrator | 2025-04-01 19:55:46 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:46.364543 | orchestrator | 2025-04-01 19:55:46 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:46.364576 | orchestrator | 2025-04-01 19:55:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:49.423154 | orchestrator | 2025-04-01 19:55:49 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:49.426861 | orchestrator | 2025-04-01 19:55:49 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:49.428432 | orchestrator | 2025-04-01 19:55:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:49.430007 | orchestrator | 2025-04-01 19:55:49 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:49.439259 | orchestrator | 2025-04-01 19:55:49 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:52.483612 | orchestrator | 2025-04-01 19:55:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:52.483887 | orchestrator | 2025-04-01 19:55:52 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:52.484640 | orchestrator | 2025-04-01 19:55:52 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:52.484674 | orchestrator | 2025-04-01 19:55:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:52.486113 | orchestrator | 2025-04-01 19:55:52 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:52.486887 | orchestrator | 2025-04-01 19:55:52 | INFO  | Task 8172e6db-2aa3-4d41-879d-658228d6d8ec is in state STARTED 2025-04-01 19:55:55.535366 | orchestrator | 2025-04-01 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:55.535511 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:55:55.535819 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:55:55.540687 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:55:55.542791 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:55:55.543989 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:55:55.545161 | orchestrator | 2025-04-01 19:55:55 | INFO  | Task 
8172e6db-2aa3-4d41-879d-658228d6d8ec is in state SUCCESS 2025-04-01 19:55:55.545322 | orchestrator | 2025-04-01 19:55:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:55:55.546908 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-01 19:55:55.546954 | orchestrator | 2025-04-01 19:55:55.547300 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-04-01 19:55:55.547321 | orchestrator | 2025-04-01 19:55:55.547337 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-04-01 19:55:55.547352 | orchestrator | Tuesday 01 April 2025 19:54:08 +0000 (0:00:00.485) 0:00:00.485 ********* 2025-04-01 19:55:55.547366 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547392 | orchestrator | 2025-04-01 19:55:55.547406 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-04-01 19:55:55.547421 | orchestrator | Tuesday 01 April 2025 19:54:10 +0000 (0:00:02.278) 0:00:02.763 ********* 2025-04-01 19:55:55.547434 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547449 | orchestrator | 2025-04-01 19:55:55.547463 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-04-01 19:55:55.547477 | orchestrator | Tuesday 01 April 2025 19:54:12 +0000 (0:00:01.468) 0:00:04.232 ********* 2025-04-01 19:55:55.547491 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547505 | orchestrator | 2025-04-01 19:55:55.547520 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-04-01 19:55:55.547534 | orchestrator | Tuesday 01 April 2025 19:54:13 +0000 (0:00:01.229) 0:00:05.462 ********* 2025-04-01 19:55:55.547548 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547562 | orchestrator | 2025-04-01 19:55:55.547576 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-04-01 19:55:55.547590 | orchestrator | Tuesday 01 April 2025 19:54:14 +0000 (0:00:01.046) 0:00:06.509 ********* 2025-04-01 19:55:55.547604 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547618 | orchestrator | 2025-04-01 19:55:55.547632 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-04-01 19:55:55.547647 | orchestrator | Tuesday 01 April 2025 19:54:15 +0000 (0:00:01.154) 0:00:07.663 ********* 2025-04-01 19:55:55.547661 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547675 | orchestrator | 2025-04-01 19:55:55.547712 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-04-01 19:55:55.547734 | orchestrator | Tuesday 01 April 2025 19:54:16 +0000 (0:00:01.057) 0:00:08.721 ********* 2025-04-01 19:55:55.547748 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547763 | orchestrator | 2025-04-01 19:55:55.547777 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-04-01 19:55:55.547791 | orchestrator | Tuesday 01 April 2025 19:54:19 +0000 (0:00:02.287) 0:00:11.008 ********* 2025-04-01 19:55:55.547805 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547819 | orchestrator | 2025-04-01 19:55:55.547833 | orchestrator | TASK [Create admin user] ******************************************************* 2025-04-01 19:55:55.547846 | orchestrator | Tuesday 01 April 2025 
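
The dashboard bootstrap play above toggles the Ceph mgr dashboard module and its settings one config key at a time. Expressed as plain ceph commands (for example via the cephclient wrapper), the sequence would look roughly like the following; the password file path is illustrative and the exact commands issued by the play are not shown in the log:

    # Hedged equivalent of the dashboard bootstrap steps above.
    ceph mgr module disable dashboard
    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_port 7000
    ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
    ceph config set mgr mgr/dashboard/standby_behaviour error
    ceph config set mgr mgr/dashboard/standby_error_status_code 404
    ceph mgr module enable dashboard
    ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator

The follow-up plays then restart the mgr service on testbed-node-0/1/2 so the new dashboard settings take effect.
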
19:54:20 +0000 (0:00:01.509) 0:00:12.517 ********* 2025-04-01 19:55:55.547860 | orchestrator | changed: [testbed-manager] 2025-04-01 19:55:55.547874 | orchestrator | 2025-04-01 19:55:55.547888 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-04-01 19:55:55.547902 | orchestrator | Tuesday 01 April 2025 19:54:36 +0000 (0:00:15.889) 0:00:28.407 ********* 2025-04-01 19:55:55.547916 | orchestrator | skipping: [testbed-manager] 2025-04-01 19:55:55.547930 | orchestrator | 2025-04-01 19:55:55.547944 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-01 19:55:55.547958 | orchestrator | 2025-04-01 19:55:55.547972 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-01 19:55:55.547986 | orchestrator | Tuesday 01 April 2025 19:54:37 +0000 (0:00:00.897) 0:00:29.304 ********* 2025-04-01 19:55:55.548000 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.548013 | orchestrator | 2025-04-01 19:55:55.548027 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-01 19:55:55.548041 | orchestrator | 2025-04-01 19:55:55.548055 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-01 19:55:55.548069 | orchestrator | Tuesday 01 April 2025 19:54:39 +0000 (0:00:02.261) 0:00:31.566 ********* 2025-04-01 19:55:55.548092 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:55:55.548106 | orchestrator | 2025-04-01 19:55:55.548120 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-01 19:55:55.548134 | orchestrator | 2025-04-01 19:55:55.548148 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-01 19:55:55.548162 | orchestrator | Tuesday 01 April 2025 19:54:41 +0000 (0:00:01.782) 0:00:33.348 ********* 2025-04-01 19:55:55.548176 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:55:55.548190 | orchestrator | 2025-04-01 19:55:55.548204 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:55:55.548219 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-01 19:55:55.548234 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:55:55.548249 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:55:55.548263 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:55:55.548277 | orchestrator | 2025-04-01 19:55:55.548291 | orchestrator | 2025-04-01 19:55:55.548305 | orchestrator | 2025-04-01 19:55:55.548319 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:55:55.548333 | orchestrator | Tuesday 01 April 2025 19:54:43 +0000 (0:00:01.677) 0:00:35.025 ********* 2025-04-01 19:55:55.548347 | orchestrator | =============================================================================== 2025-04-01 19:55:55.548361 | orchestrator | Create admin user ------------------------------------------------------ 15.89s 2025-04-01 19:55:55.548411 | orchestrator | Restart ceph manager service -------------------------------------------- 5.73s 2025-04-01 19:55:55.548427 | orchestrator | Enable the 
ceph dashboard ----------------------------------------------- 2.29s 2025-04-01 19:55:55.548441 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.28s 2025-04-01 19:55:55.548455 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.51s 2025-04-01 19:55:55.548469 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.47s 2025-04-01 19:55:55.548483 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.23s 2025-04-01 19:55:55.548497 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.15s 2025-04-01 19:55:55.548511 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.06s 2025-04-01 19:55:55.548525 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s 2025-04-01 19:55:55.548540 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.90s 2025-04-01 19:55:55.548553 | orchestrator | 2025-04-01 19:55:55.548568 | orchestrator | 2025-04-01 19:55:55.548581 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:55:55.548596 | orchestrator | 2025-04-01 19:55:55.548616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:55:55.548630 | orchestrator | Tuesday 01 April 2025 19:53:38 +0000 (0:00:00.376) 0:00:00.376 ********* 2025-04-01 19:55:55.548645 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:55:55.548660 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:55:55.548674 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:55:55.548688 | orchestrator | 2025-04-01 19:55:55.548723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:55:55.548737 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.491) 0:00:00.868 ********* 2025-04-01 19:55:55.548752 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-04-01 19:55:55.548766 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-04-01 19:55:55.548780 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-04-01 19:55:55.548802 | orchestrator | 2025-04-01 19:55:55.548817 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-04-01 19:55:55.548831 | orchestrator | 2025-04-01 19:55:55.548845 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-01 19:55:55.548859 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.333) 0:00:01.201 ********* 2025-04-01 19:55:55.548874 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:55:55.548889 | orchestrator | 2025-04-01 19:55:55.548903 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-04-01 19:55:55.548917 | orchestrator | Tuesday 01 April 2025 19:53:40 +0000 (0:00:01.036) 0:00:02.238 ********* 2025-04-01 19:55:55.548931 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-04-01 19:55:55.548946 | orchestrator | 2025-04-01 19:55:55.548960 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-04-01 19:55:55.548974 | orchestrator | Tuesday 01 April 2025 
19:53:44 +0000 (0:00:04.230) 0:00:06.469 ********* 2025-04-01 19:55:55.548988 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-04-01 19:55:55.549002 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-04-01 19:55:55.549016 | orchestrator | 2025-04-01 19:55:55.549030 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-04-01 19:55:55.549045 | orchestrator | Tuesday 01 April 2025 19:53:51 +0000 (0:00:06.251) 0:00:12.720 ********* 2025-04-01 19:55:55.549059 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 19:55:55.549073 | orchestrator | 2025-04-01 19:55:55.549087 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-04-01 19:55:55.549101 | orchestrator | Tuesday 01 April 2025 19:53:55 +0000 (0:00:04.001) 0:00:16.722 ********* 2025-04-01 19:55:55.549115 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 19:55:55.549129 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-04-01 19:55:55.549143 | orchestrator | 2025-04-01 19:55:55.549157 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-04-01 19:55:55.549171 | orchestrator | Tuesday 01 April 2025 19:53:58 +0000 (0:00:03.826) 0:00:20.549 ********* 2025-04-01 19:55:55.549185 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 19:55:55.549200 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-04-01 19:55:55.549214 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-04-01 19:55:55.549228 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-04-01 19:55:55.549242 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-04-01 19:55:55.549256 | orchestrator | 2025-04-01 19:55:55.549270 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-04-01 19:55:55.549284 | orchestrator | Tuesday 01 April 2025 19:54:15 +0000 (0:00:16.516) 0:00:37.066 ********* 2025-04-01 19:55:55.549298 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-04-01 19:55:55.549312 | orchestrator | 2025-04-01 19:55:55.549326 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-04-01 19:55:55.549341 | orchestrator | Tuesday 01 April 2025 19:54:19 +0000 (0:00:04.555) 0:00:41.621 ********* 2025-04-01 19:55:55.549366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 
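
The service-ks-register tasks earlier in this play (creating the barbican service, its internal and public endpoints, the service user and the key-manager roles) correspond to ordinary OpenStack client calls. A rough equivalent, with the region name and password handling as assumptions:

    # Hedged sketch of the Keystone registration performed for barbican.
    openstack service create --name barbican --description "Key Manager" key-manager
    openstack endpoint create --region RegionOne barbican internal https://api-int.testbed.osism.xyz:9311
    openstack endpoint create --region RegionOne barbican public https://api.testbed.osism.xyz:9311
    openstack user create --project service --password-prompt barbican
    for role in key-manager:service-admin creator observer audit; do
      openstack role create "$role"
    done
    openstack role add --project service --user barbican admin
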
2025-04-01 19:55:55.549471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549539 | orchestrator | 2025-04-01 19:55:55.549554 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-04-01 19:55:55.549568 | orchestrator | Tuesday 01 April 2025 19:54:23 +0000 (0:00:03.740) 0:00:45.362 ********* 2025-04-01 19:55:55.549583 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-04-01 19:55:55.549597 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-04-01 19:55:55.549611 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-04-01 19:55:55.549625 | orchestrator | 2025-04-01 19:55:55.549639 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-04-01 19:55:55.549653 | orchestrator | Tuesday 01 April 2025 
19:54:25 +0000 (0:00:02.200) 0:00:47.562 ********* 2025-04-01 19:55:55.549667 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.549681 | orchestrator | 2025-04-01 19:55:55.549721 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-04-01 19:55:55.549737 | orchestrator | Tuesday 01 April 2025 19:54:26 +0000 (0:00:00.139) 0:00:47.702 ********* 2025-04-01 19:55:55.549751 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.549765 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.549780 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.549794 | orchestrator | 2025-04-01 19:55:55.549808 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-01 19:55:55.549822 | orchestrator | Tuesday 01 April 2025 19:54:27 +0000 (0:00:00.948) 0:00:48.651 ********* 2025-04-01 19:55:55.549843 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:55:55.549857 | orchestrator | 2025-04-01 19:55:55.549872 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-04-01 19:55:55.549886 | orchestrator | Tuesday 01 April 2025 19:54:28 +0000 (0:00:01.274) 0:00:49.926 ********* 2025-04-01 19:55:55.549909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.549958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.549979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550143 | orchestrator | 2025-04-01 19:55:55.550158 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-04-01 19:55:55.550179 | orchestrator | Tuesday 01 April 2025 19:54:33 +0000 (0:00:05.142) 0:00:55.068 ********* 2025-04-01 19:55:55.550195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550243 | orchestrator 
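
Each container definition above carries a kolla healthcheck: healthcheck_curl probes the barbican-api endpoint on the node's internal IP, while healthcheck_port checks that the keystone-listener and worker processes hold their RabbitMQ connection on port 5672. Rough hand-run stand-ins (not the same checks kolla performs inside the containers; IP and container name are taken from the items above) would be:

    # Illustrative only; not the in-container kolla healthchecks themselves.
    curl -sf http://192.168.16.10:9311 >/dev/null && echo "barbican-api answers"
    docker inspect --format '{{.State.Health.Status}}' barbican_api
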
| skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550258 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.550273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550324 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.550348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550394 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.550408 | orchestrator | 2025-04-01 19:55:55.550422 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-04-01 19:55:55.550436 | orchestrator | Tuesday 01 April 2025 19:54:34 +0000 (0:00:01.543) 0:00:56.611 ********* 2025-04-01 19:55:55.550451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550479 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550514 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.550529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550574 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.550589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.550612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.550648 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.550663 | orchestrator | 2025-04-01 19:55:55.550677 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-04-01 19:55:55.550724 | orchestrator | Tuesday 01 April 2025 19:54:36 +0000 (0:00:01.771) 0:00:58.382 ********* 2025-04-01 19:55:55.550773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.550789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.550813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.550837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.550945 | orchestrator | 2025-04-01 19:55:55.550959 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-04-01 19:55:55.550973 | orchestrator | Tuesday 01 April 2025 19:54:41 +0000 (0:00:05.088) 0:01:03.471 
********* 2025-04-01 19:55:55.550988 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.551001 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:55:55.551015 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:55:55.551029 | orchestrator | 2025-04-01 19:55:55.551043 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-04-01 19:55:55.551057 | orchestrator | Tuesday 01 April 2025 19:54:45 +0000 (0:00:03.403) 0:01:06.874 ********* 2025-04-01 19:55:55.551076 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:55:55.551091 | orchestrator | 2025-04-01 19:55:55.551104 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-04-01 19:55:55.551119 | orchestrator | Tuesday 01 April 2025 19:54:48 +0000 (0:00:03.428) 0:01:10.303 ********* 2025-04-01 19:55:55.551133 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.551147 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.551162 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.551175 | orchestrator | 2025-04-01 19:55:55.551189 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-04-01 19:55:55.551203 | orchestrator | Tuesday 01 April 2025 19:54:50 +0000 (0:00:02.158) 0:01:12.461 ********* 2025-04-01 19:55:55.551218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.551250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.551266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.551281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.551393 | orchestrator | 2025-04-01 19:55:55.551408 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-04-01 19:55:55.551422 | orchestrator | Tuesday 01 April 2025 19:55:05 +0000 (0:00:14.770) 0:01:27.231 ********* 2025-04-01 19:55:55.551444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.551469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 
19:55:55.551491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.551506 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.551521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.551536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.551551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.551565 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.551596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-01 19:55:55.551620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.551635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:55:55.551649 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.551663 | orchestrator | 2025-04-01 19:55:55.551678 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-04-01 19:55:55.551712 | orchestrator | Tuesday 01 April 2025 19:55:07 +0000 (0:00:02.141) 0:01:29.373 ********* 2025-04-01 19:55:55.551728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.551750 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.551985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-01 19:55:55.552011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:55:55.552115 | orchestrator | 2025-04-01 19:55:55.552130 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-01 19:55:55.552144 | orchestrator | Tuesday 01 April 2025 19:55:12 +0000 (0:00:04.770) 0:01:34.143 ********* 2025-04-01 19:55:55.552159 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:55:55.552173 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:55:55.552187 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:55:55.552201 | orchestrator | 2025-04-01 19:55:55.552215 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-04-01 19:55:55.552229 | orchestrator | Tuesday 01 April 2025 19:55:12 +0000 (0:00:00.367) 0:01:34.511 ********* 2025-04-01 19:55:55.552243 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552257 | orchestrator | 2025-04-01 19:55:55.552271 | orchestrator | TASK 
[barbican : Creating barbican database user and setting permissions] ****** 2025-04-01 19:55:55.552285 | orchestrator | Tuesday 01 April 2025 19:55:15 +0000 (0:00:02.815) 0:01:37.326 ********* 2025-04-01 19:55:55.552299 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552313 | orchestrator | 2025-04-01 19:55:55.552327 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-04-01 19:55:55.552340 | orchestrator | Tuesday 01 April 2025 19:55:18 +0000 (0:00:03.164) 0:01:40.491 ********* 2025-04-01 19:55:55.552354 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552368 | orchestrator | 2025-04-01 19:55:55.552383 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-01 19:55:55.552396 | orchestrator | Tuesday 01 April 2025 19:55:30 +0000 (0:00:11.862) 0:01:52.355 ********* 2025-04-01 19:55:55.552410 | orchestrator | 2025-04-01 19:55:55.552424 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-01 19:55:55.552438 | orchestrator | Tuesday 01 April 2025 19:55:30 +0000 (0:00:00.233) 0:01:52.589 ********* 2025-04-01 19:55:55.552452 | orchestrator | 2025-04-01 19:55:55.552466 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-01 19:55:55.552480 | orchestrator | Tuesday 01 April 2025 19:55:31 +0000 (0:00:00.647) 0:01:53.236 ********* 2025-04-01 19:55:55.552494 | orchestrator | 2025-04-01 19:55:55.552508 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-04-01 19:55:55.552527 | orchestrator | Tuesday 01 April 2025 19:55:31 +0000 (0:00:00.149) 0:01:53.385 ********* 2025-04-01 19:55:55.552541 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552555 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:55:55.552575 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:55:55.552589 | orchestrator | 2025-04-01 19:55:55.552603 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-04-01 19:55:55.552617 | orchestrator | Tuesday 01 April 2025 19:55:39 +0000 (0:00:07.557) 0:02:00.943 ********* 2025-04-01 19:55:55.552631 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552645 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:55:55.552659 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:55:55.552673 | orchestrator | 2025-04-01 19:55:55.552687 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-04-01 19:55:55.552719 | orchestrator | Tuesday 01 April 2025 19:55:45 +0000 (0:00:06.533) 0:02:07.477 ********* 2025-04-01 19:55:55.552740 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:55:55.552754 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:55:55.552768 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:55:55.552782 | orchestrator | 2025-04-01 19:55:55.552796 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:55:55.552810 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 19:55:55.552825 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:55:55.552840 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 
19:55:55.552854 | orchestrator |
2025-04-01 19:55:55.552868 | orchestrator |
2025-04-01 19:55:55.552882 | orchestrator | TASKS RECAP ********************************************************************
2025-04-01 19:55:55.552902 | orchestrator | Tuesday 01 April 2025 19:55:53 +0000 (0:00:07.530) 0:02:15.007 *********
2025-04-01 19:55:58.590087 | orchestrator | ===============================================================================
2025-04-01 19:55:58.590202 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.52s
2025-04-01 19:55:58.590220 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 14.77s
2025-04-01 19:55:58.590235 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.86s
2025-04-01 19:55:58.590250 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.56s
2025-04-01 19:55:58.590264 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.53s
2025-04-01 19:55:58.590278 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.53s
2025-04-01 19:55:58.590292 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.25s
2025-04-01 19:55:58.590306 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.14s
2025-04-01 19:55:58.590320 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.09s
2025-04-01 19:55:58.590335 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.77s
2025-04-01 19:55:58.590349 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.56s
2025-04-01 19:55:58.590363 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.23s
2025-04-01 19:55:58.590377 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.00s
2025-04-01 19:55:58.590391 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s
2025-04-01 19:55:58.590405 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.74s
2025-04-01 19:55:58.590419 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 3.43s
2025-04-01 19:55:58.590433 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.40s
2025-04-01 19:55:58.590446 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 3.16s
2025-04-01 19:55:58.590461 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.82s
2025-04-01 19:55:58.590475 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.20s
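The barbican loop items dumped in the tasks above all share one structure: the kolla-ansible service definition for barbican-api, barbican-keystone-listener and barbican-worker. For readability, here is that structure reconstructed as a small Python sketch using only values that appear in the log (trimmed to the barbican-api entry as rendered for testbed-node-0); the dict name and the summary loop at the end are illustrative additions, not part of the playbooks.

# Reconstructed from the loop item dumps above (barbican-api on testbed-node-0).
# The variable name and the print loop are illustrative only.
barbican_services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "group": "barbican-api",
        "enabled": True,
        "environment": {"CS_AUTH_KEYS": ""},
        "image": "registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206",
        "volumes": [
            "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "barbican:/var/lib/barbican/",
            "kolla_logs:/var/log/kolla/",
            "",  # trailing empty entry exactly as printed in the log
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
            "timeout": "30",
        },
        "haproxy": {
            "barbican_api": {
                "enabled": "yes",
                "mode": "http",
                "external": False,
                "port": "9311",
                "listen_port": "9311",
                "tls_backend": "no",
            },
            "barbican_api_external": {
                "enabled": "yes",
                "mode": "http",
                "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9311",
                "listen_port": "9311",
                "tls_backend": "no",
            },
        },
    },
    # barbican-keystone-listener and barbican-worker follow the same shape in the
    # log, but with 'healthcheck_port <service> 5672' checks and no haproxy block.
}

# Example: print the healthcheck command of every enabled service.
for name, svc in barbican_services.items():
    if svc["enabled"]:
        print(f"{name}: {' '.join(svc['healthcheck']['test'])}")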
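After the kolla-ansible play finishes, the job keeps polling the OSISM manager for the remaining deployment tasks and prints "Wait 1 second(s) until the next check" until they leave the STARTED state, as the lines that follow show. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state() lookup (the actual osism client call is not visible in this log); only the task IDs and messages are taken from the output.

import time

# Task IDs as they appear in the polling output below.
TASK_IDS = [
    "f331488a-f359-4398-ba9d-5b9a6a5facb6",
    "ee339fd5-f3af-4161-95ea-1bdbea52a2af",
    "d5a76ddc-d4e6-418c-b083-27476f7e45f6",
    "aa2524f4-a625-4b6b-adac-0dc9967e8e8d",
    "849c6dac-65fd-4985-bb71-5d8afd1e9ae3",
]

def wait_for_tasks(get_task_state, task_ids=TASK_IDS, interval=1):
    """Poll each task's state and sleep until none of them is STARTED.

    get_task_state is a placeholder callable (task_id -> state string);
    it stands in for whatever the real client uses to query the manager.
    """
    while True:
        states = {task_id: get_task_state(task_id) for task_id in task_ids}
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)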
2025-04-01 19:55:58.590504 | orchestrator | 2025-04-01 19:55:58 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:55:58.591740 | orchestrator | 2025-04-01 19:55:58 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:55:58.591880 | orchestrator | 2025-04-01 19:55:58 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED
2025-04-01 19:55:58.592768 | orchestrator | 2025-04-01 19:55:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:55:58.593456 | orchestrator | 2025-04-01 19:55:58 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED
2025-04-01 19:55:58.593622 | orchestrator | 2025-04-01 19:55:58 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:56:01.632501 | orchestrator | 2025-04-01 19:56:01 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:56:01.633161 | orchestrator | 2025-04-01 19:56:01 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:56:01.633999 | orchestrator | 2025-04-01 19:56:01 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED
2025-04-01 19:56:01.635114 | orchestrator | 2025-04-01 19:56:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:56:01.636192 | orchestrator | 2025-04-01 19:56:01 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED
2025-04-01 19:56:04.672897 | orchestrator | 2025-04-01 19:56:01 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:56:04.673052 | orchestrator | 2025-04-01 19:56:04 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:56:04.673668 | orchestrator | 2025-04-01 19:56:04 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:56:04.675590 | orchestrator | 2025-04-01 19:56:04 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED
2025-04-01 19:56:04.677470 | orchestrator | 2025-04-01 19:56:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:56:04.679454 | orchestrator | 2025-04-01 19:56:04 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED
2025-04-01 19:56:04.679484 | orchestrator | 2025-04-01 19:56:04 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:56:07.731156 | orchestrator | 2025-04-01 19:56:07 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:56:07.732731 | orchestrator | 2025-04-01 19:56:07 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:56:07.734482 | orchestrator | 2025-04-01 19:56:07 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED
2025-04-01 19:56:07.735513 | orchestrator | 2025-04-01 19:56:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:56:07.736332 | orchestrator | 2025-04-01 19:56:07 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED
2025-04-01 19:56:10.770151 | orchestrator | 2025-04-01 19:56:07 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:56:10.770338 | orchestrator | 2025-04-01 19:56:10 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:56:10.770614 | orchestrator | 2025-04-01 19:56:10 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:56:10.771625 | orchestrator | 2025-04-01 19:56:10 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED
2025-04-01 19:56:10.772898 | orchestrator | 2025-04-01 19:56:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:56:10.774149 | orchestrator | 2025-04-01 19:56:10 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED
2025-04-01 19:56:13.807086 | orchestrator | 2025-04-01 19:56:10 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:56:13.807221 | orchestrator | 2025-04-01 19:56:13 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED
2025-04-01 19:56:13.807829 | orchestrator | 2025-04-01 19:56:13 | INFO  | Task 
ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:13.808746 | orchestrator | 2025-04-01 19:56:13 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:13.809556 | orchestrator | 2025-04-01 19:56:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:13.810415 | orchestrator | 2025-04-01 19:56:13 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:13.810618 | orchestrator | 2025-04-01 19:56:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:16.857922 | orchestrator | 2025-04-01 19:56:16 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:16.858297 | orchestrator | 2025-04-01 19:56:16 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:16.862685 | orchestrator | 2025-04-01 19:56:16 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:19.910665 | orchestrator | 2025-04-01 19:56:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:19.910836 | orchestrator | 2025-04-01 19:56:16 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:19.910857 | orchestrator | 2025-04-01 19:56:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:19.910891 | orchestrator | 2025-04-01 19:56:19 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:19.914220 | orchestrator | 2025-04-01 19:56:19 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:19.915258 | orchestrator | 2025-04-01 19:56:19 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:19.916148 | orchestrator | 2025-04-01 19:56:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:19.917225 | orchestrator | 2025-04-01 19:56:19 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:22.970071 | orchestrator | 2025-04-01 19:56:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:22.970210 | orchestrator | 2025-04-01 19:56:22 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:22.972138 | orchestrator | 2025-04-01 19:56:22 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:22.973490 | orchestrator | 2025-04-01 19:56:22 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:22.974872 | orchestrator | 2025-04-01 19:56:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:22.976534 | orchestrator | 2025-04-01 19:56:22 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:26.018788 | orchestrator | 2025-04-01 19:56:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:26.018952 | orchestrator | 2025-04-01 19:56:26 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:26.019521 | orchestrator | 2025-04-01 19:56:26 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:26.019557 | orchestrator | 2025-04-01 19:56:26 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:26.020137 | orchestrator | 2025-04-01 19:56:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:26.021777 | orchestrator | 2025-04-01 
19:56:26 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:29.066467 | orchestrator | 2025-04-01 19:56:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:29.066571 | orchestrator | 2025-04-01 19:56:29 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:29.067996 | orchestrator | 2025-04-01 19:56:29 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:29.069089 | orchestrator | 2025-04-01 19:56:29 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:29.069104 | orchestrator | 2025-04-01 19:56:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:29.076613 | orchestrator | 2025-04-01 19:56:29 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:32.113341 | orchestrator | 2025-04-01 19:56:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:32.113476 | orchestrator | 2025-04-01 19:56:32 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:32.114084 | orchestrator | 2025-04-01 19:56:32 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:32.115860 | orchestrator | 2025-04-01 19:56:32 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:32.116812 | orchestrator | 2025-04-01 19:56:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:32.117888 | orchestrator | 2025-04-01 19:56:32 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:35.157387 | orchestrator | 2025-04-01 19:56:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:35.157573 | orchestrator | 2025-04-01 19:56:35 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:35.157753 | orchestrator | 2025-04-01 19:56:35 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:35.158448 | orchestrator | 2025-04-01 19:56:35 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:35.159175 | orchestrator | 2025-04-01 19:56:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:35.159884 | orchestrator | 2025-04-01 19:56:35 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:38.205384 | orchestrator | 2025-04-01 19:56:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:38.205560 | orchestrator | 2025-04-01 19:56:38 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:38.208107 | orchestrator | 2025-04-01 19:56:38 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:38.210679 | orchestrator | 2025-04-01 19:56:38 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:38.212783 | orchestrator | 2025-04-01 19:56:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:38.214782 | orchestrator | 2025-04-01 19:56:38 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:41.276596 | orchestrator | 2025-04-01 19:56:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:41.276833 | orchestrator | 2025-04-01 19:56:41 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:41.278128 | orchestrator | 2025-04-01 
19:56:41 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:41.278161 | orchestrator | 2025-04-01 19:56:41 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:41.279613 | orchestrator | 2025-04-01 19:56:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:41.281178 | orchestrator | 2025-04-01 19:56:41 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:44.315841 | orchestrator | 2025-04-01 19:56:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:44.316130 | orchestrator | 2025-04-01 19:56:44 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:44.316391 | orchestrator | 2025-04-01 19:56:44 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:44.316428 | orchestrator | 2025-04-01 19:56:44 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:44.317052 | orchestrator | 2025-04-01 19:56:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:44.318849 | orchestrator | 2025-04-01 19:56:44 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:47.378811 | orchestrator | 2025-04-01 19:56:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:47.378981 | orchestrator | 2025-04-01 19:56:47 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:47.379280 | orchestrator | 2025-04-01 19:56:47 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:47.380134 | orchestrator | 2025-04-01 19:56:47 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:47.380973 | orchestrator | 2025-04-01 19:56:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:47.381811 | orchestrator | 2025-04-01 19:56:47 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:50.428171 | orchestrator | 2025-04-01 19:56:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:50.428270 | orchestrator | 2025-04-01 19:56:50 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:50.429768 | orchestrator | 2025-04-01 19:56:50 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:50.431127 | orchestrator | 2025-04-01 19:56:50 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:50.432148 | orchestrator | 2025-04-01 19:56:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:50.433596 | orchestrator | 2025-04-01 19:56:50 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:50.434100 | orchestrator | 2025-04-01 19:56:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:53.471649 | orchestrator | 2025-04-01 19:56:53 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:53.473905 | orchestrator | 2025-04-01 19:56:53 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:53.474754 | orchestrator | 2025-04-01 19:56:53 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:53.475796 | orchestrator | 2025-04-01 19:56:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:53.477886 | 
orchestrator | 2025-04-01 19:56:53 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:53.478845 | orchestrator | 2025-04-01 19:56:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:56.517442 | orchestrator | 2025-04-01 19:56:56 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:56.517594 | orchestrator | 2025-04-01 19:56:56 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:56.518443 | orchestrator | 2025-04-01 19:56:56 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:56.519032 | orchestrator | 2025-04-01 19:56:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:56.519807 | orchestrator | 2025-04-01 19:56:56 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:56:59.558240 | orchestrator | 2025-04-01 19:56:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:56:59.558382 | orchestrator | 2025-04-01 19:56:59 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:56:59.558609 | orchestrator | 2025-04-01 19:56:59 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:56:59.559523 | orchestrator | 2025-04-01 19:56:59 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:56:59.560281 | orchestrator | 2025-04-01 19:56:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:56:59.561105 | orchestrator | 2025-04-01 19:56:59 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:02.612239 | orchestrator | 2025-04-01 19:56:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:02.612373 | orchestrator | 2025-04-01 19:57:02 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:02.613629 | orchestrator | 2025-04-01 19:57:02 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:02.615098 | orchestrator | 2025-04-01 19:57:02 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:57:02.616369 | orchestrator | 2025-04-01 19:57:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:02.618134 | orchestrator | 2025-04-01 19:57:02 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:05.656682 | orchestrator | 2025-04-01 19:57:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:05.656855 | orchestrator | 2025-04-01 19:57:05 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:05.659122 | orchestrator | 2025-04-01 19:57:05 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:05.659966 | orchestrator | 2025-04-01 19:57:05 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:57:05.660778 | orchestrator | 2025-04-01 19:57:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:05.664132 | orchestrator | 2025-04-01 19:57:05 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:08.712158 | orchestrator | 2025-04-01 19:57:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:08.712279 | orchestrator | 2025-04-01 19:57:08 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:08.714888 | 
orchestrator | 2025-04-01 19:57:08 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:08.717761 | orchestrator | 2025-04-01 19:57:08 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:57:08.720764 | orchestrator | 2025-04-01 19:57:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:08.721543 | orchestrator | 2025-04-01 19:57:08 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:11.775995 | orchestrator | 2025-04-01 19:57:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:11.776110 | orchestrator | 2025-04-01 19:57:11 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:11.776886 | orchestrator | 2025-04-01 19:57:11 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:11.778203 | orchestrator | 2025-04-01 19:57:11 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state STARTED 2025-04-01 19:57:11.779301 | orchestrator | 2025-04-01 19:57:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:11.780141 | orchestrator | 2025-04-01 19:57:11 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:11.780750 | orchestrator | 2025-04-01 19:57:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:14.858185 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:14.863316 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:14.870180 | orchestrator | 2025-04-01 19:57:14.870224 | orchestrator | 2025-04-01 19:57:14.870240 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:57:14.870256 | orchestrator | 2025-04-01 19:57:14.870271 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:57:14.870302 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.384) 0:00:00.384 ********* 2025-04-01 19:57:14.870325 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:57:14.870341 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:57:14.870451 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:57:14.870471 | orchestrator | 2025-04-01 19:57:14.870587 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:57:14.870607 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.507) 0:00:00.892 ********* 2025-04-01 19:57:14.870623 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-04-01 19:57:14.870637 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-04-01 19:57:14.870651 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-04-01 19:57:14.870665 | orchestrator | 2025-04-01 19:57:14.870679 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-04-01 19:57:14.870693 | orchestrator | 2025-04-01 19:57:14.870742 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-01 19:57:14.870758 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.400) 0:00:01.293 ********* 2025-04-01 19:57:14.870773 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-04-01 19:57:14.870790 | orchestrator | 2025-04-01 19:57:14.871392 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-04-01 19:57:14.871413 | orchestrator | Tuesday 01 April 2025 19:53:40 +0000 (0:00:00.850) 0:00:02.144 ********* 2025-04-01 19:57:14.871565 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-04-01 19:57:14.871585 | orchestrator | 2025-04-01 19:57:14.871600 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-04-01 19:57:14.871931 | orchestrator | Tuesday 01 April 2025 19:53:45 +0000 (0:00:04.310) 0:00:06.454 ********* 2025-04-01 19:57:14.871955 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-04-01 19:57:14.871970 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-04-01 19:57:14.871985 | orchestrator | 2025-04-01 19:57:14.871999 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-04-01 19:57:14.872013 | orchestrator | Tuesday 01 April 2025 19:53:50 +0000 (0:00:05.825) 0:00:12.280 ********* 2025-04-01 19:57:14.872028 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-04-01 19:57:14.872042 | orchestrator | 2025-04-01 19:57:14.872055 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-04-01 19:57:14.872069 | orchestrator | Tuesday 01 April 2025 19:53:55 +0000 (0:00:04.432) 0:00:16.712 ********* 2025-04-01 19:57:14.872109 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 19:57:14.872124 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-04-01 19:57:14.872138 | orchestrator | 2025-04-01 19:57:14.872152 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-04-01 19:57:14.872165 | orchestrator | Tuesday 01 April 2025 19:53:59 +0000 (0:00:03.944) 0:00:20.657 ********* 2025-04-01 19:57:14.872179 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 19:57:14.872193 | orchestrator | 2025-04-01 19:57:14.872207 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-04-01 19:57:14.872221 | orchestrator | Tuesday 01 April 2025 19:54:02 +0000 (0:00:02.924) 0:00:23.582 ********* 2025-04-01 19:57:14.872235 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-04-01 19:57:14.872249 | orchestrator | 2025-04-01 19:57:14.872263 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-04-01 19:57:14.872277 | orchestrator | Tuesday 01 April 2025 19:54:06 +0000 (0:00:04.604) 0:00:28.186 ********* 2025-04-01 19:57:14.872293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.872349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.872367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.872383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872424 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.872682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.872740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.872784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.872801 | orchestrator | 2025-04-01 19:57:14.872817 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-04-01 19:57:14.872832 | orchestrator | Tuesday 01 April 2025 19:54:10 +0000 (0:00:03.333) 0:00:31.519 ********* 2025-04-01 19:57:14.872847 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.872863 | orchestrator | 2025-04-01 19:57:14.872878 | orchestrator | TASK [designate : Set designate policy file] 
*********************************** 2025-04-01 19:57:14.872894 | orchestrator | Tuesday 01 April 2025 19:54:10 +0000 (0:00:00.131) 0:00:31.650 ********* 2025-04-01 19:57:14.872909 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.872925 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:14.872939 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.872953 | orchestrator | 2025-04-01 19:57:14.872967 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-01 19:57:14.872981 | orchestrator | Tuesday 01 April 2025 19:54:10 +0000 (0:00:00.460) 0:00:32.111 ********* 2025-04-01 19:57:14.873003 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:57:14.873089 | orchestrator | 2025-04-01 19:57:14.873104 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-04-01 19:57:14.873118 | orchestrator | Tuesday 01 April 2025 19:54:11 +0000 (0:00:00.731) 0:00:32.842 ********* 2025-04-01 19:57:14.873133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.873148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.873163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.873211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.873529 | orchestrator | 2025-04-01 19:57:14.873543 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-04-01 19:57:14.873558 | orchestrator | Tuesday 01 April 2025 19:54:18 +0000 (0:00:07.325) 0:00:40.167 ********* 2025-04-01 19:57:14.873572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.873587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.873602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.873617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.873631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.873681 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task d5a76ddc-d4e6-418c-b083-27476f7e45f6 is in state SUCCESS 2025-04-01 19:57:14.873698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874090 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.874150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.874169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.874184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.874433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874446 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:14.874460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.874473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874574 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.874587 | orchestrator | 2025-04-01 19:57:14.874601 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-04-01 19:57:14.874615 | orchestrator | Tuesday 01 April 2025 19:54:22 +0000 (0:00:03.612) 0:00:43.780 ********* 2025-04-01 19:57:14.874627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.874641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.874654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874814 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.874828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.874842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.874855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.874947 | orchestrator | skipping: [testbed-node-1] 
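For readability, here is one of the loop items from this run pulled out and reformatted as a standalone snippet: the designate-api entry of the Designate service map that kolla-ansible iterates over in the tasks above and below. All keys and values are copied from the log output of this build; only the variable name holding the dict is invented for the sketch and is not something the playbook defines under that name.

# Minimal sketch: the designate-api loop item as printed above, reindented.
# Values are verbatim from this run; `designate_api_service` is an illustrative name only.
designate_api_service = {
    "container_name": "designate_api",
    "group": "designate-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/designate-api:18.0.1.20241206",
    "volumes": [
        "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # empty entry appears as-is in the log output
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
        "timeout": "30",
    },
    "haproxy": {
        "designate_api": {
            "enabled": "yes",
            "mode": "http",
            "external": False,
            "port": "9001",
            "listen_port": "9001",
        },
        "designate_api_external": {
            "enabled": "yes",
            "mode": "http",
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9001",
            "listen_port": "9001",
        },
    },
}

The other items in the loop (designate-backend-bind9, designate-central, designate-mdns, designate-producer, designate-worker, designate-sink) follow the same shape, differing only in image, volumes, and healthcheck test; only the api entry carries the haproxy sub-dict.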
2025-04-01 19:57:14.874960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.874973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.874987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875063 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875078 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.875091 | orchestrator | 2025-04-01 19:57:14.875104 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-04-01 19:57:14.875117 | orchestrator | Tuesday 01 April 2025 19:54:24 +0000 (0:00:02.031) 0:00:45.811 ********* 2025-04-01 19:57:14.875130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 
19:57:14.875285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.875532 | orchestrator | 2025-04-01 19:57:14.875545 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-04-01 19:57:14.875558 | orchestrator | Tuesday 01 April 2025 19:54:32 +0000 (0:00:08.128) 0:00:53.940 ********* 2025-04-01 19:57:14.875571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.875862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.875989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876208 | orchestrator | 2025-04-01 19:57:14.876221 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-04-01 19:57:14.876234 | orchestrator | Tuesday 01 April 2025 19:55:02 +0000 (0:00:30.134) 0:01:24.074 ********* 2025-04-01 19:57:14.876247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-01 19:57:14.876262 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-01 19:57:14.876275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-01 19:57:14.876287 | orchestrator | 2025-04-01 19:57:14.876300 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-04-01 19:57:14.876339 | orchestrator | Tuesday 01 April 2025 19:55:12 +0000 (0:00:10.252) 0:01:34.327 ********* 2025-04-01 19:57:14.876353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-01 19:57:14.876366 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-01 19:57:14.876378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-01 19:57:14.876391 | orchestrator | 2025-04-01 19:57:14.876403 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-04-01 19:57:14.876416 | orchestrator | Tuesday 01 April 2025 19:55:18 +0000 (0:00:05.597) 0:01:39.925 ********* 2025-04-01 19:57:14.876429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876592 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876804 | orchestrator | 2025-04-01 
19:57:14.876817 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-04-01 19:57:14.876831 | orchestrator | Tuesday 01 April 2025 19:55:23 +0000 (0:00:04.810) 0:01:44.735 ********* 2025-04-01 19:57:14.876844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.876900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.876925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.876989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.877028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.877093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.877118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.877230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877245 | orchestrator | 2025-04-01 19:57:14.877258 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-01 19:57:14.877272 | orchestrator | Tuesday 01 April 2025 19:55:26 +0000 (0:00:03.296) 0:01:48.031 ********* 2025-04-01 19:57:14.877292 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.877306 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:14.877319 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.877332 | orchestrator | 2025-04-01 19:57:14.877345 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-04-01 19:57:14.877358 | orchestrator | Tuesday 01 April 2025 19:55:27 +0000 (0:00:00.539) 0:01:48.571 ********* 2025-04-01 19:57:14.877379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.877395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.877409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877494 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.877507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.877522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.877536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877621 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:14.877635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-01 19:57:14.877649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-01 19:57:14.877663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.877794 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.877807 | orchestrator | 2025-04-01 19:57:14.877819 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-04-01 19:57:14.877832 | orchestrator | Tuesday 01 April 2025 19:55:29 +0000 (0:00:02.001) 0:01:50.572 ********* 2025-04-01 19:57:14.877846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.877954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.877984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-01 19:57:14.878008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.878341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.878354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-01 19:57:14.878373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-01 19:57:14.878386 | orchestrator | 2025-04-01 19:57:14.878399 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-01 19:57:14.878412 | orchestrator | Tuesday 01 April 2025 19:55:35 +0000 (0:00:06.597) 0:01:57.171 ********* 2025-04-01 19:57:14.878424 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:14.878437 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:14.878449 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:14.878461 | orchestrator | 2025-04-01 19:57:14.878474 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-04-01 19:57:14.878486 | orchestrator | Tuesday 01 April 2025 19:55:36 +0000 (0:00:01.023) 0:01:58.194 ********* 2025-04-01 19:57:14.878499 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-04-01 19:57:14.878512 | orchestrator | 2025-04-01 19:57:14.878524 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-04-01 19:57:14.878537 | orchestrator | Tuesday 01 April 2025 19:55:39 +0000 (0:00:02.312) 0:02:00.507 ********* 2025-04-01 19:57:14.878549 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 19:57:14.878561 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-04-01 19:57:14.878574 | orchestrator | 2025-04-01 19:57:14.878586 | orchestrator | TASK [designate : Running Designate bootstrap 
container] *********************** 2025-04-01 19:57:14.878598 | orchestrator | Tuesday 01 April 2025 19:55:41 +0000 (0:00:02.415) 0:02:02.923 ********* 2025-04-01 19:57:14.878611 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.878624 | orchestrator | 2025-04-01 19:57:14.878636 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-01 19:57:14.878649 | orchestrator | Tuesday 01 April 2025 19:55:56 +0000 (0:00:14.905) 0:02:17.829 ********* 2025-04-01 19:57:14.878661 | orchestrator | 2025-04-01 19:57:14.878673 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-01 19:57:14.878686 | orchestrator | Tuesday 01 April 2025 19:55:56 +0000 (0:00:00.059) 0:02:17.889 ********* 2025-04-01 19:57:14.878698 | orchestrator | 2025-04-01 19:57:14.878778 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-04-01 19:57:14.878792 | orchestrator | Tuesday 01 April 2025 19:55:56 +0000 (0:00:00.072) 0:02:17.961 ********* 2025-04-01 19:57:14.878805 | orchestrator | 2025-04-01 19:57:14.878823 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-04-01 19:57:14.878836 | orchestrator | Tuesday 01 April 2025 19:55:56 +0000 (0:00:00.062) 0:02:18.024 ********* 2025-04-01 19:57:14.878848 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.878861 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:14.878873 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.878886 | orchestrator | 2025-04-01 19:57:14.878898 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-04-01 19:57:14.878911 | orchestrator | Tuesday 01 April 2025 19:56:07 +0000 (0:00:11.103) 0:02:29.128 ********* 2025-04-01 19:57:14.878923 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.878936 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.878948 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:14.878960 | orchestrator | 2025-04-01 19:57:14.878973 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-04-01 19:57:14.878993 | orchestrator | Tuesday 01 April 2025 19:56:15 +0000 (0:00:07.762) 0:02:36.891 ********* 2025-04-01 19:57:14.879006 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.879019 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:14.879031 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.879043 | orchestrator | 2025-04-01 19:57:14.879056 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-04-01 19:57:14.879068 | orchestrator | Tuesday 01 April 2025 19:56:25 +0000 (0:00:10.424) 0:02:47.315 ********* 2025-04-01 19:57:14.879081 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.879093 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:14.879106 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.879118 | orchestrator | 2025-04-01 19:57:14.879130 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-04-01 19:57:14.879143 | orchestrator | Tuesday 01 April 2025 19:56:38 +0000 (0:00:12.302) 0:02:59.618 ********* 2025-04-01 19:57:14.879155 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.879167 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.879180 | orchestrator | changed: 
[testbed-node-1] 2025-04-01 19:57:14.879192 | orchestrator | 2025-04-01 19:57:14.879205 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-04-01 19:57:14.879217 | orchestrator | Tuesday 01 April 2025 19:56:51 +0000 (0:00:12.872) 0:03:12.491 ********* 2025-04-01 19:57:14.879230 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.879242 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:14.879255 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:14.879267 | orchestrator | 2025-04-01 19:57:14.879280 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-04-01 19:57:14.879292 | orchestrator | Tuesday 01 April 2025 19:57:06 +0000 (0:00:15.705) 0:03:28.197 ********* 2025-04-01 19:57:14.879305 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:14.879317 | orchestrator | 2025-04-01 19:57:14.879330 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:57:14.879344 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 19:57:14.879358 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:57:14.879371 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:57:14.879384 | orchestrator | 2025-04-01 19:57:14.879396 | orchestrator | 2025-04-01 19:57:14.879409 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:57:14.879422 | orchestrator | Tuesday 01 April 2025 19:57:12 +0000 (0:00:05.981) 0:03:34.178 ********* 2025-04-01 19:57:14.879434 | orchestrator | =============================================================================== 2025-04-01 19:57:14.879447 | orchestrator | designate : Copying over designate.conf -------------------------------- 30.13s 2025-04-01 19:57:14.879459 | orchestrator | designate : Restart designate-worker container ------------------------- 15.71s 2025-04-01 19:57:14.879472 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.91s 2025-04-01 19:57:14.879485 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.87s 2025-04-01 19:57:14.879497 | orchestrator | designate : Restart designate-producer container ----------------------- 12.30s 2025-04-01 19:57:14.879510 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.10s 2025-04-01 19:57:14.879522 | orchestrator | designate : Restart designate-central container ------------------------ 10.42s 2025-04-01 19:57:14.879535 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.25s 2025-04-01 19:57:14.879547 | orchestrator | designate : Copying over config.json files for services ----------------- 8.13s 2025-04-01 19:57:14.879560 | orchestrator | designate : Restart designate-api container ----------------------------- 7.76s 2025-04-01 19:57:14.879582 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.33s 2025-04-01 19:57:14.879595 | orchestrator | designate : Check designate containers ---------------------------------- 6.60s 2025-04-01 19:57:14.879608 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.98s 2025-04-01 19:57:14.879621 | orchestrator | 
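Each designate service in the dicts earlier in this play carries the same healthcheck shape: interval/retries/start_period/timeout of 30/3/5/30 plus a test command (healthcheck_curl for the API, healthcheck_listen for bind9, healthcheck_port for the RPC-style services). As a rough outside-in approximation of what those probes verify — an editor's sketch, not the kolla healthcheck scripts that actually run inside the containers — the following Python reproduces the two basic cases with the same retry/timeout values; the addresses and ports are the ones visible in the log, and the function names are made up for illustration:

    import socket
    import time
    import urllib.request

    def http_ok(url, timeout=30):
        # Stand-in for the healthcheck_curl test: the HTTP endpoint must answer without a server error.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except OSError:
            return False

    def port_open(host, port, timeout=30):
        # Loose stand-in for the healthcheck_port / healthcheck_listen tests: a TCP connect must succeed.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def probe(check, retries=3, interval=30):
        # Re-run a failing check a few times, loosely mirroring the retries/interval settings above.
        for _ in range(retries):
            if check():
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        # Endpoints taken from the designate-api and designate-backend-bind9 entries in this play.
        print("designate-api:", probe(lambda: http_ok("http://192.168.16.10:9001")))
        print("bind9:", probe(lambda: port_open("192.168.16.10", 53)))

In the deployment itself the test commands run inside the containers as their healthchecks, so failures surface in the container status rather than in this play output; the sketch only mimics the pass/fail conditions from outside.
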
service-ks-register : designate | Creating endpoints -------------------- 5.83s 2025-04-01 19:57:14.879633 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.60s 2025-04-01 19:57:14.879646 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.81s 2025-04-01 19:57:14.879664 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.60s 2025-04-01 19:57:17.928447 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.43s 2025-04-01 19:57:17.928599 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.31s 2025-04-01 19:57:17.928620 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.94s 2025-04-01 19:57:17.928636 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:17.928652 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:17.928667 | orchestrator | 2025-04-01 19:57:14 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:17.928682 | orchestrator | 2025-04-01 19:57:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:17.928750 | orchestrator | 2025-04-01 19:57:17 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:17.929445 | orchestrator | 2025-04-01 19:57:17 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:17.929478 | orchestrator | 2025-04-01 19:57:17 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:17.930196 | orchestrator | 2025-04-01 19:57:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:17.931338 | orchestrator | 2025-04-01 19:57:17 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:20.983477 | orchestrator | 2025-04-01 19:57:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:20.983648 | orchestrator | 2025-04-01 19:57:20 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:20.985522 | orchestrator | 2025-04-01 19:57:20 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:20.987919 | orchestrator | 2025-04-01 19:57:20 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:20.990098 | orchestrator | 2025-04-01 19:57:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:20.992515 | orchestrator | 2025-04-01 19:57:20 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:24.046361 | orchestrator | 2025-04-01 19:57:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:24.046538 | orchestrator | 2025-04-01 19:57:24 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:24.047860 | orchestrator | 2025-04-01 19:57:24 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:24.048999 | orchestrator | 2025-04-01 19:57:24 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:24.050461 | orchestrator | 2025-04-01 19:57:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:24.052276 | orchestrator | 2025-04-01 19:57:24 | INFO  | Task 
849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:27.102508 | orchestrator | 2025-04-01 19:57:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:27.102696 | orchestrator | 2025-04-01 19:57:27 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state STARTED 2025-04-01 19:57:27.103918 | orchestrator | 2025-04-01 19:57:27 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:27.105761 | orchestrator | 2025-04-01 19:57:27 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:27.107083 | orchestrator | 2025-04-01 19:57:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:27.109076 | orchestrator | 2025-04-01 19:57:27 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:30.154769 | orchestrator | 2025-04-01 19:57:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:30.155072 | orchestrator | 2025-04-01 19:57:30 | INFO  | Task f331488a-f359-4398-ba9d-5b9a6a5facb6 is in state SUCCESS 2025-04-01 19:57:30.156055 | orchestrator | 2025-04-01 19:57:30.156092 | orchestrator | 2025-04-01 19:57:30.156106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:57:30.156120 | orchestrator | 2025-04-01 19:57:30.156133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:57:30.156146 | orchestrator | Tuesday 01 April 2025 19:56:00 +0000 (0:00:00.510) 0:00:00.510 ********* 2025-04-01 19:57:30.156158 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:57:30.156173 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:57:30.156187 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:57:30.156223 | orchestrator | 2025-04-01 19:57:30.156236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:57:30.156249 | orchestrator | Tuesday 01 April 2025 19:56:01 +0000 (0:00:00.852) 0:00:01.363 ********* 2025-04-01 19:57:30.156263 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-04-01 19:57:30.156277 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-04-01 19:57:30.156290 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-04-01 19:57:30.156302 | orchestrator | 2025-04-01 19:57:30.156315 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-04-01 19:57:30.156328 | orchestrator | 2025-04-01 19:57:30.156341 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-01 19:57:30.156354 | orchestrator | Tuesday 01 April 2025 19:56:01 +0000 (0:00:00.825) 0:00:02.188 ********* 2025-04-01 19:57:30.156367 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:57:30.156382 | orchestrator | 2025-04-01 19:57:30.156394 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-04-01 19:57:30.156407 | orchestrator | Tuesday 01 April 2025 19:56:03 +0000 (0:00:01.570) 0:00:03.759 ********* 2025-04-01 19:57:30.156420 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-04-01 19:57:30.156433 | orchestrator | 2025-04-01 19:57:30.156445 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-04-01 19:57:30.156458 
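The interleaved INFO lines above show the deploy wrapper polling a handful of task IDs until each one leaves the STARTED state, sleeping one second between sweeps; the buffered placement play output that follows appears once task f331488a reports SUCCESS. A generic version of that wait loop — a sketch of the pattern only; get_state and the IDs below are stand-ins, not the OSISM client API — looks like this:

    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        # Poll every task until none is still running, printing one status line per task per sweep.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    if __name__ == "__main__":
        # Toy state source: each task reports STARTED twice and then SUCCESS.
        calls = {}
        def fake_state(task_id):
            calls[task_id] = calls.get(task_id, 0) + 1
            return "STARTED" if calls[task_id] < 3 else "SUCCESS"

        wait_for_tasks(["f331488a", "ee339fd5"], fake_state, interval=0.1)
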
| orchestrator | Tuesday 01 April 2025 19:56:07 +0000 (0:00:04.503) 0:00:08.262 ********* 2025-04-01 19:57:30.156470 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-04-01 19:57:30.156484 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-04-01 19:57:30.156497 | orchestrator | 2025-04-01 19:57:30.156510 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-04-01 19:57:30.156528 | orchestrator | Tuesday 01 April 2025 19:56:14 +0000 (0:00:06.456) 0:00:14.719 ********* 2025-04-01 19:57:30.156541 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 19:57:30.156582 | orchestrator | 2025-04-01 19:57:30.156596 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-04-01 19:57:30.156609 | orchestrator | Tuesday 01 April 2025 19:56:18 +0000 (0:00:04.505) 0:00:19.224 ********* 2025-04-01 19:57:30.156621 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 19:57:30.156635 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-04-01 19:57:30.156649 | orchestrator | 2025-04-01 19:57:30.156663 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-04-01 19:57:30.156677 | orchestrator | Tuesday 01 April 2025 19:56:23 +0000 (0:00:04.572) 0:00:23.796 ********* 2025-04-01 19:57:30.156691 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 19:57:30.156724 | orchestrator | 2025-04-01 19:57:30.156740 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-04-01 19:57:30.156754 | orchestrator | Tuesday 01 April 2025 19:56:26 +0000 (0:00:03.295) 0:00:27.092 ********* 2025-04-01 19:57:30.156769 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-04-01 19:57:30.156783 | orchestrator | 2025-04-01 19:57:30.156798 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-01 19:57:30.156812 | orchestrator | Tuesday 01 April 2025 19:56:31 +0000 (0:00:04.974) 0:00:32.067 ********* 2025-04-01 19:57:30.156826 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.156842 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:30.156856 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:30.156870 | orchestrator | 2025-04-01 19:57:30.156884 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-04-01 19:57:30.156898 | orchestrator | Tuesday 01 April 2025 19:56:32 +0000 (0:00:01.068) 0:00:33.136 ********* 2025-04-01 19:57:30.156915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.156949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157027 | orchestrator | 2025-04-01 19:57:30.157040 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-04-01 19:57:30.157058 | orchestrator | Tuesday 01 April 2025 19:56:34 +0000 (0:00:01.799) 0:00:34.935 ********* 2025-04-01 19:57:30.157071 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.157084 | orchestrator | 2025-04-01 19:57:30.157097 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-04-01 19:57:30.157109 | orchestrator | Tuesday 01 April 2025 19:56:34 +0000 (0:00:00.137) 0:00:35.073 ********* 2025-04-01 19:57:30.157122 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.157135 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:30.157147 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:30.157160 | orchestrator | 2025-04-01 19:57:30.157172 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-01 19:57:30.157190 | orchestrator | Tuesday 01 April 2025 19:56:35 +0000 (0:00:00.478) 0:00:35.551 ********* 2025-04-01 19:57:30.157203 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:57:30.157216 | orchestrator | 2025-04-01 19:57:30.157228 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-04-01 
19:57:30.157241 | orchestrator | Tuesday 01 April 2025 19:56:36 +0000 (0:00:01.592) 0:00:37.144 ********* 2025-04-01 19:57:30.157254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157319 | orchestrator | 2025-04-01 19:57:30.157332 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-04-01 19:57:30.157345 | orchestrator | Tuesday 01 April 2025 19:56:39 +0000 (0:00:02.914) 0:00:40.059 ********* 2025-04-01 19:57:30.157369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157383 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.157396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157409 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:30.157430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157444 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:30.157456 | orchestrator | 2025-04-01 19:57:30.157476 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-04-01 19:57:30.157488 | orchestrator | Tuesday 01 April 2025 19:56:41 +0000 (0:00:01.933) 0:00:41.992 ********* 2025-04-01 19:57:30.157601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157617 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.157630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157643 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:30.157669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.157683 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:30.157696 | orchestrator | 2025-04-01 19:57:30.157731 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-04-01 19:57:30.157745 | orchestrator | Tuesday 01 April 2025 19:56:44 +0000 (0:00:02.348) 0:00:44.341 ********* 2025-04-01 19:57:30.157769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157817 | orchestrator | 2025-04-01 19:57:30.157830 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-04-01 19:57:30.157843 | orchestrator | Tuesday 01 April 2025 19:56:46 +0000 (0:00:02.394) 0:00:46.735 ********* 2025-04-01 19:57:30.157866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.157922 | orchestrator | 2025-04-01 19:57:30.157935 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-04-01 19:57:30.157948 | orchestrator | Tuesday 01 April 2025 19:56:50 +0000 (0:00:04.487) 0:00:51.223 ********* 2025-04-01 19:57:30.157961 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-01 19:57:30.157974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-01 19:57:30.157987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-01 19:57:30.158000 | orchestrator | 2025-04-01 19:57:30.158012 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-04-01 19:57:30.158095 | orchestrator | Tuesday 01 April 2025 19:56:55 +0000 (0:00:04.267) 0:00:55.491 ********* 2025-04-01 19:57:30.158109 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:30.158155 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:30.158168 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:30.158181 | orchestrator | 2025-04-01 19:57:30.158194 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-04-01 19:57:30.158206 | orchestrator | Tuesday 01 April 2025 19:56:57 +0000 (0:00:02.793) 0:00:58.284 ********* 2025-04-01 19:57:30.158220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.158234 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:57:30.158267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.158290 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:57:30.158315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-01 19:57:30.158330 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:57:30.158345 | orchestrator | 2025-04-01 19:57:30.158359 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-04-01 19:57:30.158372 | orchestrator | Tuesday 01 April 2025 19:56:59 +0000 (0:00:01.387) 0:00:59.671 ********* 2025-04-01 19:57:30.158386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.158401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.158425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-01 19:57:30.158447 | orchestrator | 2025-04-01 19:57:30.158461 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-04-01 19:57:30.158475 | orchestrator | Tuesday 01 April 2025 19:57:00 +0000 (0:00:01.561) 0:01:01.233 ********* 2025-04-01 19:57:30.158489 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:30.158502 | orchestrator | 2025-04-01 19:57:30.158515 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-04-01 19:57:30.158529 | orchestrator | Tuesday 01 April 2025 19:57:04 +0000 (0:00:03.809) 0:01:05.043 ********* 2025-04-01 19:57:30.158542 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:30.158556 | orchestrator | 2025-04-01 19:57:30.158569 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-04-01 19:57:30.158583 | orchestrator | Tuesday 01 April 2025 19:57:07 +0000 (0:00:02.613) 0:01:07.656 ********* 2025-04-01 19:57:30.158601 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:30.159878 | orchestrator | 2025-04-01 19:57:30.159901 | orchestrator | TASK [placement : Flush 
handlers] ********************************************** 2025-04-01 19:57:30.159914 | orchestrator | Tuesday 01 April 2025 19:57:19 +0000 (0:00:12.408) 0:01:20.065 ********* 2025-04-01 19:57:30.159927 | orchestrator | 2025-04-01 19:57:30.159939 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-01 19:57:30.159958 | orchestrator | Tuesday 01 April 2025 19:57:19 +0000 (0:00:00.060) 0:01:20.125 ********* 2025-04-01 19:57:30.159971 | orchestrator | 2025-04-01 19:57:30.159984 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-01 19:57:30.159997 | orchestrator | Tuesday 01 April 2025 19:57:19 +0000 (0:00:00.212) 0:01:20.337 ********* 2025-04-01 19:57:30.160009 | orchestrator | 2025-04-01 19:57:30.160021 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-04-01 19:57:30.160034 | orchestrator | Tuesday 01 April 2025 19:57:20 +0000 (0:00:00.077) 0:01:20.415 ********* 2025-04-01 19:57:30.160047 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:57:30.160060 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:57:30.160072 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:57:30.160085 | orchestrator | 2025-04-01 19:57:30.160097 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:57:30.160111 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-01 19:57:30.160126 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:57:30.160139 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-01 19:57:30.160151 | orchestrator | 2025-04-01 19:57:30.160164 | orchestrator | 2025-04-01 19:57:30.160176 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:57:30.160189 | orchestrator | Tuesday 01 April 2025 19:57:29 +0000 (0:00:09.575) 0:01:29.991 ********* 2025-04-01 19:57:30.160201 | orchestrator | =============================================================================== 2025-04-01 19:57:30.160214 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.41s 2025-04-01 19:57:30.160227 | orchestrator | placement : Restart placement-api container ----------------------------- 9.58s 2025-04-01 19:57:30.160239 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.46s 2025-04-01 19:57:30.160260 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.97s 2025-04-01 19:57:30.160273 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.57s 2025-04-01 19:57:30.160286 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.51s 2025-04-01 19:57:30.160298 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.50s 2025-04-01 19:57:30.160311 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.49s 2025-04-01 19:57:30.160323 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 4.27s 2025-04-01 19:57:30.160336 | orchestrator | placement : Creating placement databases -------------------------------- 3.81s 2025-04-01 19:57:30.160348 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.30s 2025-04-01 19:57:30.160361 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.91s 2025-04-01 19:57:30.160373 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.79s 2025-04-01 19:57:30.160385 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.61s 2025-04-01 19:57:30.160398 | orchestrator | placement : Copying over config.json files for services ----------------- 2.39s 2025-04-01 19:57:30.160410 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.35s 2025-04-01 19:57:30.160422 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.93s 2025-04-01 19:57:30.160435 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.80s 2025-04-01 19:57:30.160448 | orchestrator | placement : include_tasks ----------------------------------------------- 1.59s 2025-04-01 19:57:30.160460 | orchestrator | placement : include_tasks ----------------------------------------------- 1.57s 2025-04-01 19:57:30.160473 | orchestrator | 2025-04-01 19:57:30 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:30.160491 | orchestrator | 2025-04-01 19:57:30 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:30.161680 | orchestrator | 2025-04-01 19:57:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:30.163245 | orchestrator | 2025-04-01 19:57:30 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:33.222641 | orchestrator | 2025-04-01 19:57:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:33.222812 | orchestrator | 2025-04-01 19:57:33 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:33.223983 | orchestrator | 2025-04-01 19:57:33 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:33.224875 | orchestrator | 2025-04-01 19:57:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:33.225894 | orchestrator | 2025-04-01 19:57:33 | INFO  | Task a55e3f6c-2274-46b6-bdff-304e284ce5e8 is in state STARTED 2025-04-01 19:57:33.227087 | orchestrator | 2025-04-01 19:57:33 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:33.227265 | orchestrator | 2025-04-01 19:57:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:36.275646 | orchestrator | 2025-04-01 19:57:36 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:36.277973 | orchestrator | 2025-04-01 19:57:36 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:36.281174 | orchestrator | 2025-04-01 19:57:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:36.282920 | orchestrator | 2025-04-01 19:57:36 | INFO  | Task a55e3f6c-2274-46b6-bdff-304e284ce5e8 is in state SUCCESS 2025-04-01 19:57:36.284904 | orchestrator | 2025-04-01 19:57:36 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:39.340245 | orchestrator | 2025-04-01 19:57:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:39.340381 | orchestrator | 2025-04-01 19:57:39 | INFO  | Task 
ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:39.341947 | orchestrator | 2025-04-01 19:57:39 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:39.344188 | orchestrator | 2025-04-01 19:57:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:39.347476 | orchestrator | 2025-04-01 19:57:39 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:39.348976 | orchestrator | 2025-04-01 19:57:39 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:39.349311 | orchestrator | 2025-04-01 19:57:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:42.398117 | orchestrator | 2025-04-01 19:57:42 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:42.400934 | orchestrator | 2025-04-01 19:57:42 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:42.403765 | orchestrator | 2025-04-01 19:57:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:42.406385 | orchestrator | 2025-04-01 19:57:42 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:42.407824 | orchestrator | 2025-04-01 19:57:42 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:42.408088 | orchestrator | 2025-04-01 19:57:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:45.456531 | orchestrator | 2025-04-01 19:57:45 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:45.461497 | orchestrator | 2025-04-01 19:57:45 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:45.466615 | orchestrator | 2025-04-01 19:57:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:45.469001 | orchestrator | 2025-04-01 19:57:45 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:45.469247 | orchestrator | 2025-04-01 19:57:45 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:48.519515 | orchestrator | 2025-04-01 19:57:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:48.519682 | orchestrator | 2025-04-01 19:57:48 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:48.523665 | orchestrator | 2025-04-01 19:57:48 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:48.526060 | orchestrator | 2025-04-01 19:57:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:48.526545 | orchestrator | 2025-04-01 19:57:48 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:48.528801 | orchestrator | 2025-04-01 19:57:48 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:51.578704 | orchestrator | 2025-04-01 19:57:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:51.578906 | orchestrator | 2025-04-01 19:57:51 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:51.584418 | orchestrator | 2025-04-01 19:57:51 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:51.590956 | orchestrator | 2025-04-01 19:57:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:51.591019 | orchestrator | 2025-04-01 
19:57:51 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:51.592291 | orchestrator | 2025-04-01 19:57:51 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:51.592734 | orchestrator | 2025-04-01 19:57:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:54.663936 | orchestrator | 2025-04-01 19:57:54 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:54.664989 | orchestrator | 2025-04-01 19:57:54 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:54.665611 | orchestrator | 2025-04-01 19:57:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:54.667144 | orchestrator | 2025-04-01 19:57:54 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:54.668554 | orchestrator | 2025-04-01 19:57:54 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:57:57.717747 | orchestrator | 2025-04-01 19:57:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:57:57.717888 | orchestrator | 2025-04-01 19:57:57 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:57:57.720973 | orchestrator | 2025-04-01 19:57:57 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:57:57.721810 | orchestrator | 2025-04-01 19:57:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:57:57.722733 | orchestrator | 2025-04-01 19:57:57 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:57:57.723758 | orchestrator | 2025-04-01 19:57:57 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:00.754890 | orchestrator | 2025-04-01 19:57:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:00.755029 | orchestrator | 2025-04-01 19:58:00 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:00.755169 | orchestrator | 2025-04-01 19:58:00 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:00.756490 | orchestrator | 2025-04-01 19:58:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:00.760430 | orchestrator | 2025-04-01 19:58:00 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:03.806227 | orchestrator | 2025-04-01 19:58:00 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:03.806338 | orchestrator | 2025-04-01 19:58:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:03.806373 | orchestrator | 2025-04-01 19:58:03 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:03.808663 | orchestrator | 2025-04-01 19:58:03 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:03.813416 | orchestrator | 2025-04-01 19:58:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:03.814750 | orchestrator | 2025-04-01 19:58:03 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:03.816544 | orchestrator | 2025-04-01 19:58:03 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:03.817012 | orchestrator | 2025-04-01 19:58:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:06.859151 | orchestrator | 2025-04-01 
19:58:06 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:06.860628 | orchestrator | 2025-04-01 19:58:06 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:06.861057 | orchestrator | 2025-04-01 19:58:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:06.862430 | orchestrator | 2025-04-01 19:58:06 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:06.863276 | orchestrator | 2025-04-01 19:58:06 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:09.908610 | orchestrator | 2025-04-01 19:58:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:09.908780 | orchestrator | 2025-04-01 19:58:09 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:09.909055 | orchestrator | 2025-04-01 19:58:09 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:09.910103 | orchestrator | 2025-04-01 19:58:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:09.911091 | orchestrator | 2025-04-01 19:58:09 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:09.912354 | orchestrator | 2025-04-01 19:58:09 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:09.912384 | orchestrator | 2025-04-01 19:58:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:12.949273 | orchestrator | 2025-04-01 19:58:12 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:12.951337 | orchestrator | 2025-04-01 19:58:12 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:12.953366 | orchestrator | 2025-04-01 19:58:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:12.955239 | orchestrator | 2025-04-01 19:58:12 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:12.956776 | orchestrator | 2025-04-01 19:58:12 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:12.956969 | orchestrator | 2025-04-01 19:58:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:16.011645 | orchestrator | 2025-04-01 19:58:16 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:16.012034 | orchestrator | 2025-04-01 19:58:16 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:16.012772 | orchestrator | 2025-04-01 19:58:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:16.013764 | orchestrator | 2025-04-01 19:58:16 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:16.014813 | orchestrator | 2025-04-01 19:58:16 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:19.081964 | orchestrator | 2025-04-01 19:58:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:19.082147 | orchestrator | 2025-04-01 19:58:19 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:19.083929 | orchestrator | 2025-04-01 19:58:19 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:19.083957 | orchestrator | 2025-04-01 19:58:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:19.083978 | 
orchestrator | 2025-04-01 19:58:19 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:19.084778 | orchestrator | 2025-04-01 19:58:19 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:22.126919 | orchestrator | 2025-04-01 19:58:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:22.127056 | orchestrator | 2025-04-01 19:58:22 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:22.128063 | orchestrator | 2025-04-01 19:58:22 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:22.129526 | orchestrator | 2025-04-01 19:58:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:22.130236 | orchestrator | 2025-04-01 19:58:22 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:22.132363 | orchestrator | 2025-04-01 19:58:22 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:25.179572 | orchestrator | 2025-04-01 19:58:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:25.179771 | orchestrator | 2025-04-01 19:58:25 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:25.179910 | orchestrator | 2025-04-01 19:58:25 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:25.181753 | orchestrator | 2025-04-01 19:58:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:25.184031 | orchestrator | 2025-04-01 19:58:25 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:25.184842 | orchestrator | 2025-04-01 19:58:25 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:28.226470 | orchestrator | 2025-04-01 19:58:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:28.226598 | orchestrator | 2025-04-01 19:58:28 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:28.226877 | orchestrator | 2025-04-01 19:58:28 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:28.227403 | orchestrator | 2025-04-01 19:58:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:28.228764 | orchestrator | 2025-04-01 19:58:28 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:28.228838 | orchestrator | 2025-04-01 19:58:28 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:28.228855 | orchestrator | 2025-04-01 19:58:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:31.254318 | orchestrator | 2025-04-01 19:58:31 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:31.254906 | orchestrator | 2025-04-01 19:58:31 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:31.255447 | orchestrator | 2025-04-01 19:58:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:31.258489 | orchestrator | 2025-04-01 19:58:31 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:34.289862 | orchestrator | 2025-04-01 19:58:31 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:34.289977 | orchestrator | 2025-04-01 19:58:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:34.290063 | 
orchestrator | 2025-04-01 19:58:34 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:34.290452 | orchestrator | 2025-04-01 19:58:34 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:34.290505 | orchestrator | 2025-04-01 19:58:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:34.291911 | orchestrator | 2025-04-01 19:58:34 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:34.292362 | orchestrator | 2025-04-01 19:58:34 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:37.365292 | orchestrator | 2025-04-01 19:58:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:37.365510 | orchestrator | 2025-04-01 19:58:37 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:37.365980 | orchestrator | 2025-04-01 19:58:37 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:37.366056 | orchestrator | 2025-04-01 19:58:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:37.366812 | orchestrator | 2025-04-01 19:58:37 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:37.367444 | orchestrator | 2025-04-01 19:58:37 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:40.410222 | orchestrator | 2025-04-01 19:58:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:40.410344 | orchestrator | 2025-04-01 19:58:40 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:40.411948 | orchestrator | 2025-04-01 19:58:40 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:40.413621 | orchestrator | 2025-04-01 19:58:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:40.414598 | orchestrator | 2025-04-01 19:58:40 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:40.415706 | orchestrator | 2025-04-01 19:58:40 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:40.416141 | orchestrator | 2025-04-01 19:58:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:43.471364 | orchestrator | 2025-04-01 19:58:43 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:43.472458 | orchestrator | 2025-04-01 19:58:43 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:43.474870 | orchestrator | 2025-04-01 19:58:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:43.476299 | orchestrator | 2025-04-01 19:58:43 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:43.477599 | orchestrator | 2025-04-01 19:58:43 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:43.477828 | orchestrator | 2025-04-01 19:58:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:46.534914 | orchestrator | 2025-04-01 19:58:46 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:46.536755 | orchestrator | 2025-04-01 19:58:46 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:46.538266 | orchestrator | 2025-04-01 19:58:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 
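The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above and below come from a client-side wait loop that keeps polling the state of the queued tasks until they leave the STARTED state. A minimal sketch of such a loop, assuming a hypothetical get_task_state(task_id) helper that returns strings such as "STARTED" or "SUCCESS" (the helper is not part of this log and is not the actual OSISM implementation), could look like this:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task until none of them reports STARTED anymore.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

The one-second interval matches the "Wait 1 second(s)" messages in the log; the polling continues below until the remaining tasks reach SUCCESS.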
2025-04-01 19:58:46.540056 | orchestrator | 2025-04-01 19:58:46 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:46.541606 | orchestrator | 2025-04-01 19:58:46 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:49.593815 | orchestrator | 2025-04-01 19:58:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:49.593949 | orchestrator | 2025-04-01 19:58:49 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:49.594288 | orchestrator | 2025-04-01 19:58:49 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:49.596285 | orchestrator | 2025-04-01 19:58:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:49.597152 | orchestrator | 2025-04-01 19:58:49 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:49.599862 | orchestrator | 2025-04-01 19:58:49 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:52.657162 | orchestrator | 2025-04-01 19:58:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:52.657284 | orchestrator | 2025-04-01 19:58:52 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:52.659032 | orchestrator | 2025-04-01 19:58:52 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:52.661812 | orchestrator | 2025-04-01 19:58:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:52.662978 | orchestrator | 2025-04-01 19:58:52 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:52.664758 | orchestrator | 2025-04-01 19:58:52 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:55.710433 | orchestrator | 2025-04-01 19:58:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:55.710641 | orchestrator | 2025-04-01 19:58:55 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:55.711053 | orchestrator | 2025-04-01 19:58:55 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:55.711908 | orchestrator | 2025-04-01 19:58:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:55.711945 | orchestrator | 2025-04-01 19:58:55 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:55.712701 | orchestrator | 2025-04-01 19:58:55 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:58:55.712841 | orchestrator | 2025-04-01 19:58:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:58:58.763548 | orchestrator | 2025-04-01 19:58:58 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:58:58.765036 | orchestrator | 2025-04-01 19:58:58 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:58:58.767399 | orchestrator | 2025-04-01 19:58:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:58:58.769384 | orchestrator | 2025-04-01 19:58:58 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:58:58.770954 | orchestrator | 2025-04-01 19:58:58 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:59:01.831075 | orchestrator | 2025-04-01 19:58:58 | INFO  | Wait 1 second(s) until the next check 
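For reference, the placement tasks earlier in this play iterate over a service-definition item that the log prints in full for every node. The item for testbed-node-0 is reproduced below as a Python literal with the values copied from the log; only the indentation is new, and this is a readability aid, not an excerpt from the kolla-ansible sources:

# placement-api service definition as logged for testbed-node-0 (reformatted).
placement_api_item = {
    "key": "placement-api",
    "value": {
        "container_name": "placement_api",
        "group": "placement-api",
        "image": "registry.osism.tech/kolla/release/placement-api:11.0.0.20241206",
        "enabled": True,
        "volumes": [
            "/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
            "timeout": "30",
        },
        "haproxy": {
            "placement_api": {
                "enabled": True,
                "mode": "http",
                "external": False,
                "port": "8780",
                "listen_port": "8780",
                "tls_backend": "no",
            },
            "placement_api_external": {
                "enabled": True,
                "mode": "http",
                "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8780",
                "listen_port": "8780",
                "tls_backend": "no",
            },
        },
    },
}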
2025-04-01 19:59:01.831216 | orchestrator | 2025-04-01 19:59:01 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:01.835053 | orchestrator | 2025-04-01 19:59:01 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:59:01.836716 | orchestrator | 2025-04-01 19:59:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:01.839110 | orchestrator | 2025-04-01 19:59:01 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:01.840538 | orchestrator | 2025-04-01 19:59:01 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:59:04.888660 | orchestrator | 2025-04-01 19:59:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:04.888846 | orchestrator | 2025-04-01 19:59:04 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:04.894339 | orchestrator | 2025-04-01 19:59:04 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:59:04.895977 | orchestrator | 2025-04-01 19:59:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:04.896911 | orchestrator | 2025-04-01 19:59:04 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:04.897621 | orchestrator | 2025-04-01 19:59:04 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state STARTED 2025-04-01 19:59:07.941872 | orchestrator | 2025-04-01 19:59:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:07.942072 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:07.943778 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:07.944189 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED 2025-04-01 19:59:07.945598 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:07.946516 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:07.953267 | orchestrator | 2025-04-01 19:59:07 | INFO  | Task 849c6dac-65fd-4985-bb71-5d8afd1e9ae3 is in state SUCCESS 2025-04-01 19:59:07.954994 | orchestrator | 2025-04-01 19:59:07.955036 | orchestrator | 2025-04-01 19:59:07.955052 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:59:07.955067 | orchestrator | 2025-04-01 19:59:07.955082 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:59:07.955096 | orchestrator | Tuesday 01 April 2025 19:57:33 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-04-01 19:59:07.955111 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.955127 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.955143 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.955158 | orchestrator | 2025-04-01 19:59:07.955321 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:59:07.955339 | orchestrator | Tuesday 01 April 2025 19:57:33 +0000 (0:00:00.433) 0:00:00.683 ********* 2025-04-01 19:59:07.955602 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-01 19:59:07.955619 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2025-04-01 19:59:07.955633 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-01 19:59:07.955647 | orchestrator | 2025-04-01 19:59:07.955662 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-04-01 19:59:07.955676 | orchestrator | 2025-04-01 19:59:07.955690 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-04-01 19:59:07.955704 | orchestrator | Tuesday 01 April 2025 19:57:34 +0000 (0:00:00.552) 0:00:01.236 ********* 2025-04-01 19:59:07.955760 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.956602 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.956885 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.956902 | orchestrator | 2025-04-01 19:59:07.956934 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:59:07.956987 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:59:07.957006 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:59:07.957022 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 19:59:07.957061 | orchestrator | 2025-04-01 19:59:07.957077 | orchestrator | 2025-04-01 19:59:07.957092 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:59:07.957107 | orchestrator | Tuesday 01 April 2025 19:57:35 +0000 (0:00:00.832) 0:00:02.068 ********* 2025-04-01 19:59:07.957122 | orchestrator | =============================================================================== 2025-04-01 19:59:07.957137 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.83s 2025-04-01 19:59:07.957152 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-04-01 19:59:07.957168 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-04-01 19:59:07.957182 | orchestrator | 2025-04-01 19:59:07.957197 | orchestrator | 2025-04-01 19:59:07.957212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 19:59:07.957227 | orchestrator | 2025-04-01 19:59:07.957242 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 19:59:07.957257 | orchestrator | Tuesday 01 April 2025 19:53:38 +0000 (0:00:00.390) 0:00:00.390 ********* 2025-04-01 19:59:07.957272 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.957287 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.957330 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.957345 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:59:07.957359 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:59:07.957373 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:59:07.957386 | orchestrator | 2025-04-01 19:59:07.957525 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 19:59:07.957545 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.984) 0:00:01.375 ********* 2025-04-01 19:59:07.957561 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-04-01 19:59:07.957577 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-04-01 19:59:07.957593 | 
orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-04-01 19:59:07.957608 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-04-01 19:59:07.957624 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-04-01 19:59:07.957639 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-04-01 19:59:07.957655 | orchestrator | 2025-04-01 19:59:07.957671 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-04-01 19:59:07.957687 | orchestrator | 2025-04-01 19:59:07.957703 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-01 19:59:07.957863 | orchestrator | Tuesday 01 April 2025 19:53:40 +0000 (0:00:00.874) 0:00:02.250 ********* 2025-04-01 19:59:07.958143 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:59:07.958171 | orchestrator | 2025-04-01 19:59:07.958188 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-04-01 19:59:07.958202 | orchestrator | Tuesday 01 April 2025 19:53:42 +0000 (0:00:01.466) 0:00:03.717 ********* 2025-04-01 19:59:07.958215 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.958229 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.958242 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.958254 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:59:07.958432 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:59:07.958450 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:59:07.958462 | orchestrator | 2025-04-01 19:59:07.958475 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-04-01 19:59:07.958488 | orchestrator | Tuesday 01 April 2025 19:53:43 +0000 (0:00:01.584) 0:00:05.301 ********* 2025-04-01 19:59:07.958500 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.958629 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.958644 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.958656 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:59:07.958669 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:59:07.958770 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:59:07.958787 | orchestrator | 2025-04-01 19:59:07.958800 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-04-01 19:59:07.958813 | orchestrator | Tuesday 01 April 2025 19:53:45 +0000 (0:00:01.239) 0:00:06.541 ********* 2025-04-01 19:59:07.958827 | orchestrator | ok: [testbed-node-0] => { 2025-04-01 19:59:07.958857 | orchestrator |  "changed": false, 2025-04-01 19:59:07.958869 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.958883 | orchestrator | } 2025-04-01 19:59:07.958895 | orchestrator | ok: [testbed-node-1] => { 2025-04-01 19:59:07.958908 | orchestrator |  "changed": false, 2025-04-01 19:59:07.958921 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.958933 | orchestrator | } 2025-04-01 19:59:07.958947 | orchestrator | ok: [testbed-node-2] => { 2025-04-01 19:59:07.958959 | orchestrator |  "changed": false, 2025-04-01 19:59:07.958972 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.958984 | orchestrator | } 2025-04-01 19:59:07.958997 | orchestrator | ok: [testbed-node-3] => { 2025-04-01 19:59:07.959009 | orchestrator |  "changed": false, 2025-04-01 
19:59:07.959021 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.959033 | orchestrator | } 2025-04-01 19:59:07.959046 | orchestrator | ok: [testbed-node-4] => { 2025-04-01 19:59:07.959058 | orchestrator |  "changed": false, 2025-04-01 19:59:07.959070 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.959082 | orchestrator | } 2025-04-01 19:59:07.959095 | orchestrator | ok: [testbed-node-5] => { 2025-04-01 19:59:07.959107 | orchestrator |  "changed": false, 2025-04-01 19:59:07.959119 | orchestrator |  "msg": "All assertions passed" 2025-04-01 19:59:07.959131 | orchestrator | } 2025-04-01 19:59:07.959144 | orchestrator | 2025-04-01 19:59:07.959156 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-04-01 19:59:07.959169 | orchestrator | Tuesday 01 April 2025 19:53:46 +0000 (0:00:00.930) 0:00:07.471 ********* 2025-04-01 19:59:07.959181 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.959193 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.959205 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.959217 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.959230 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.959242 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.959254 | orchestrator | 2025-04-01 19:59:07.959306 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-04-01 19:59:07.959333 | orchestrator | Tuesday 01 April 2025 19:53:47 +0000 (0:00:01.053) 0:00:08.525 ********* 2025-04-01 19:59:07.959347 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-04-01 19:59:07.959361 | orchestrator | 2025-04-01 19:59:07.959374 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-04-01 19:59:07.959386 | orchestrator | Tuesday 01 April 2025 19:53:50 +0000 (0:00:03.054) 0:00:11.580 ********* 2025-04-01 19:59:07.959399 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-04-01 19:59:07.959413 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-04-01 19:59:07.959426 | orchestrator | 2025-04-01 19:59:07.959446 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-04-01 19:59:07.959459 | orchestrator | Tuesday 01 April 2025 19:53:57 +0000 (0:00:07.205) 0:00:18.785 ********* 2025-04-01 19:59:07.959471 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 19:59:07.959484 | orchestrator | 2025-04-01 19:59:07.959496 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-04-01 19:59:07.959509 | orchestrator | Tuesday 01 April 2025 19:54:00 +0000 (0:00:03.053) 0:00:21.838 ********* 2025-04-01 19:59:07.959521 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 19:59:07.959534 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-04-01 19:59:07.959546 | orchestrator | 2025-04-01 19:59:07.959568 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-04-01 19:59:07.959581 | orchestrator | Tuesday 01 April 2025 19:54:04 +0000 (0:00:03.805) 0:00:25.644 ********* 2025-04-01 19:59:07.959593 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 19:59:07.959606 | orchestrator | 
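
The service-ks-register tasks above register Neutron in Keystone: the "network" service, its internal and public endpoints (api-int.testbed.osism.xyz:9696 and api.testbed.osism.xyz:9696), the "service" project, the "neutron" service user and the admin role, with the role grants following just below. As an illustrative aside only, here is a minimal Python sketch of roughly the same sequence using openstacksdk; the cloud name, region and password are placeholders, and the actual job performs this through the kolla-ansible role rather than a script like this.

    # Illustrative sketch only: roughly what the service-ks-register tasks do,
    # expressed with openstacksdk. Cloud name, region and password are
    # placeholders; the real deployment uses kolla-generated credentials.
    import openstack

    conn = openstack.connect(cloud="testbed")  # assumes a clouds.yaml entry named "testbed"

    # Service: neutron, of type "network"
    service = conn.identity.create_service(
        name="neutron", type="network", description="OpenStack Networking"
    )

    # Endpoints: internal and public, matching the URLs in the log above
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # placeholder region
        )

    # Project, user and role grant (password is a placeholder)
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(
        name="neutron", password="CHANGE_ME", default_project_id=project.id
    )
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)

Note how the log reflects idempotent behaviour: the service, endpoints and user report "changed" because they did not exist yet on this run, while the pre-existing "service" project and "admin" role report "ok".
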
2025-04-01 19:59:07.959618 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-04-01 19:59:07.959630 | orchestrator | Tuesday 01 April 2025 19:54:08 +0000 (0:00:03.989) 0:00:29.633 ********* 2025-04-01 19:59:07.959643 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-04-01 19:59:07.959666 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-04-01 19:59:07.959680 | orchestrator | 2025-04-01 19:59:07.959693 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-01 19:59:07.959705 | orchestrator | Tuesday 01 April 2025 19:54:17 +0000 (0:00:08.861) 0:00:38.494 ********* 2025-04-01 19:59:07.959735 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.959749 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.959762 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.959774 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.959787 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.959799 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.959939 | orchestrator | 2025-04-01 19:59:07.959953 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-04-01 19:59:07.959966 | orchestrator | Tuesday 01 April 2025 19:54:18 +0000 (0:00:01.091) 0:00:39.586 ********* 2025-04-01 19:59:07.959978 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.959991 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.960004 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.960016 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.960028 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.960041 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.960053 | orchestrator | 2025-04-01 19:59:07.960066 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-04-01 19:59:07.960078 | orchestrator | Tuesday 01 April 2025 19:54:23 +0000 (0:00:05.501) 0:00:45.088 ********* 2025-04-01 19:59:07.960091 | orchestrator | ok: [testbed-node-1] 2025-04-01 19:59:07.960104 | orchestrator | ok: [testbed-node-0] 2025-04-01 19:59:07.960117 | orchestrator | ok: [testbed-node-2] 2025-04-01 19:59:07.960129 | orchestrator | ok: [testbed-node-3] 2025-04-01 19:59:07.960142 | orchestrator | ok: [testbed-node-4] 2025-04-01 19:59:07.960162 | orchestrator | ok: [testbed-node-5] 2025-04-01 19:59:07.960176 | orchestrator | 2025-04-01 19:59:07.960188 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-01 19:59:07.960201 | orchestrator | Tuesday 01 April 2025 19:54:25 +0000 (0:00:01.597) 0:00:46.686 ********* 2025-04-01 19:59:07.960214 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.960226 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.960239 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.960251 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.960264 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.960276 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.960289 | orchestrator | 2025-04-01 19:59:07.960301 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-04-01 19:59:07.960314 | orchestrator | Tuesday 01 April 2025 19:54:29 +0000 (0:00:03.749) 0:00:50.436 
********* 2025-04-01 19:59:07.960331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.960356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.960446 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.960496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.960509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.960597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 
19:59:07.960610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.960645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.960660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.960682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.960696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.960802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.960858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.960873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.960920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.960943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.960956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.960983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.961008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.961123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.961169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.961182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.961288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.961302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.961351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.961394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.961462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.961478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.961507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.961546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.961595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.961650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.961693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.961782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.961820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.961920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.961980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.961993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.962209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.962229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.962255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.962308 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.962322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.962370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.962394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.962428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.962448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.962461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.962506 | orchestrator | 2025-04-01 19:59:07.962520 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-04-01 19:59:07.962533 | orchestrator | Tuesday 01 April 2025 19:54:33 +0000 (0:00:04.494) 0:00:54.930 ********* 2025-04-01 19:59:07.962546 | orchestrator | [WARNING]: Skipped 2025-04-01 19:59:07.962559 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-04-01 19:59:07.962573 | orchestrator | due to this access issue: 2025-04-01 19:59:07.962596 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-04-01 19:59:07.962609 | orchestrator | a directory 2025-04-01 19:59:07.962622 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:59:07.962694 | orchestrator | 2025-04-01 19:59:07.962710 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-01 19:59:07.962741 | orchestrator | Tuesday 01 April 2025 19:54:35 +0000 (0:00:01.867) 0:00:56.798 ********* 2025-04-01 19:59:07.962755 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 19:59:07.962770 | orchestrator | 2025-04-01 19:59:07.962835 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-04-01 19:59:07.962849 | orchestrator | Tuesday 01 April 2025 19:54:37 +0000 (0:00:02.444) 0:00:59.242 ********* 2025-04-01 19:59:07.962862 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.962979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.963004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.963018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.963058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.963083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.963097 | orchestrator | 2025-04-01 19:59:07.963110 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-04-01 19:59:07.963123 | orchestrator | Tuesday 01 April 2025 19:54:44 +0000 (0:00:06.798) 0:01:06.041 ********* 2025-04-01 19:59:07.963176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963192 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.963205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963218 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.963238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963258 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.963271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963285 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.963309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963323 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.963343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963356 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.963369 | orchestrator | 2025-04-01 19:59:07.963381 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-04-01 19:59:07.963394 | orchestrator | Tuesday 01 April 2025 19:54:50 +0000 (0:00:05.454) 0:01:11.496 ********* 2025-04-01 19:59:07.963407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963427 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.963440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963453 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.963476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963489 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.963510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.963523 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.963536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963556 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.963569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.963581 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.963594 | orchestrator | 2025-04-01 19:59:07.963606 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-04-01 19:59:07.963624 | orchestrator | Tuesday 01 April 2025 19:54:56 +0000 (0:00:06.088) 0:01:17.584 ********* 2025-04-01 19:59:07.963636 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.963649 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.963661 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.963673 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.963686 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.963698 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.963710 | orchestrator | 2025-04-01 19:59:07.963779 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-04-01 19:59:07.963794 | orchestrator | Tuesday 01 April 2025 19:55:01 +0000 (0:00:05.375) 0:01:22.960 ********* 2025-04-01 19:59:07.963807 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.963819 | orchestrator | 2025-04-01 19:59:07.963832 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-04-01 19:59:07.963844 | orchestrator | Tuesday 01 April 2025 19:55:01 +0000 (0:00:00.145) 0:01:23.105 ********* 2025-04-01 19:59:07.963856 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.963869 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.963881 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.963894 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.963906 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.963918 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.963931 | orchestrator | 2025-04-01 19:59:07.963943 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-04-01 19:59:07.963956 | orchestrator | Tuesday 01 April 2025 19:55:02 +0000 (0:00:00.960) 0:01:24.065 ********* 2025-04-01 19:59:07.963969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.964002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.964064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.964154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964168 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.964181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.964244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.964257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.964270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964337 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.964350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.964376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.964468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.964494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964543 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.964557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.964571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964583 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.964596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.964618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.964686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.964780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.964817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.964886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.964911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.964924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.964975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.965604 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.965636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.965650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.965664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.965715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', 
'&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.965755 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.965851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.965871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.965885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.965899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.965912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.965950 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.966074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.966175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.966696 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.966740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.966756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.967246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.967289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.967318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.967415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.967451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.967541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.967567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.967581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.968065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968092 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.968104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.968131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.968258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.968479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.968806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.968842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.968864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.968875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.968910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-01 19:59:07.968922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-01 19:59:07.968932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-01 19:59:07.968949 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.968960 | orchestrator |
2025-04-01 19:59:07.968971 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-04-01 19:59:07.968995 | orchestrator | Tuesday 01 April 2025 19:55:08 +0000 (0:00:05.603) 0:01:29.668 *********
2025-04-01 19:59:07.969006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.969023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.969102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.969196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.969207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.969284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.969384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.969394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.969416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.969486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.969515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.969679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.969716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.969786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.969808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.969830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.969887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.969929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.969970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.969992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.970051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.970127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.970156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.970206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.970217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.970253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.970312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.970381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.970411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.970457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.970469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.970496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-04-01 19:59:07.970524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.970570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.970581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.970617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.970640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.970692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.970704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970714 | orchestrator | 2025-04-01 19:59:07.970778 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-04-01 19:59:07.970789 | orchestrator | Tuesday 01 April 2025 19:55:13 +0000 (0:00:05.541) 0:01:35.210 ********* 2025-04-01 19:59:07.970800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
 2025-04-01 19:59:07.970818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.970881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.970924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.970942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.970986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.971071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.971152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.971217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.971298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.971351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.971402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.971424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.971465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.971475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.971507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.971607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.971672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.971681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.971690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.971702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:07.974155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.974167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.974177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-04-01 19:59:07.974242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.974261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.974349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.974396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
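Each loop item in this task output is one entry of the service map that the kolla-ansible neutron role iterates over: the key names a Neutron component and the value describes its container (image, volumes, optional healthcheck and haproxy settings). The following is a minimal Python sketch of a single entry, reconstructed from the testbed-node-3 'neutron-ovn-metadata-agent' item printed in this log rather than taken from the role's actual defaults:

# Sketch only: shape of one service entry as printed in the loop above,
# with values copied from the testbed-node-3 "neutron-ovn-metadata-agent" item.
neutron_ovn_metadata_agent = {
    "container_name": "neutron_ovn_metadata_agent",
    "image": "registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206",
    "privileged": True,
    "enabled": True,          # disabled services show up as "skipping" in this loop
    "host_in_groups": True,   # False on hosts that are outside the service's group
    "volumes": [
        "/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "neutron_metadata_socket:/var/lib/neutron/kolla/",
        "/run/openvswitch:/run/openvswitch:shared",
        "/run/netns:/run/netns:shared",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
        "timeout": "30",
    },
}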
2025-04-01 19:59:07.974426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.974465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
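The changed/skipping pattern in this output follows a simple observable rule: an item is rendered on a host only when the service is enabled and that host is in the service's group (note that 'enabled' is sometimes an Ansible-style string such as 'no' rather than a boolean, as for neutron-tls-proxy). A hedged sketch of that rule as it can be read off this log; it is not the role's actual Ansible condition:

def truthy(value) -> bool:
    # The items mix real booleans with Ansible-style strings such as 'no',
    # so normalize both forms before testing.
    if isinstance(value, str):
        return value.strip().lower() in ("yes", "true", "1")
    return bool(value)

def is_rendered(item: dict) -> bool:
    # "changed" items in this loop are exactly those that are enabled and
    # mapped to the current host; everything else is reported as "skipping".
    service = item["value"]
    return truthy(service.get("enabled")) and truthy(service.get("host_in_groups"))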
 2025-04-01 19:59:07.974496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.974547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.974566 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.974627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port 
neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.974658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.974685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.974761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974781 | orchestrator | 2025-04-01 19:59:07.974791 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-04-01 19:59:07.974800 | orchestrator | Tuesday 01 April 2025 19:55:23 +0000 (0:00:09.620) 0:01:44.831 ********* 2025-04-01 19:59:07.974810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.974828 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.974872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.974940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974949 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.974959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.974974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.974997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.975008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.975049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.975093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.975183 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.975228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975269 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.975279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.975301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.975345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.975429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.975480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975500 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.975509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.975532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.975576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.975603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975688 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.975758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.975786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.975816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.975910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.975926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.975968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.975989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.975998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976153 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.976167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.976230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.976246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.976303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-01 19:59:07.976333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-01 19:59:07.976342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-01 19:59:07.976351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-01 19:59:07.976360 | orchestrator |
2025-04-01 19:59:07.976370 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-04-01 19:59:07.976378 | orchestrator | Tuesday 01 April 2025 19:55:27 +0000 (0:00:03.729) 0:01:48.560 *********
2025-04-01 19:59:07.976387 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.976396 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.976404 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:07.976413 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.976422 | orchestrator | changed: [testbed-node-2]
2025-04-01 19:59:07.976435 | orchestrator | changed: [testbed-node-1]
2025-04-01 19:59:07.976444 | orchestrator |
2025-04-01 19:59:07.976453 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-04-01 19:59:07.976461 | orchestrator | Tuesday 01 April 2025 19:55:34 +0000 (0:00:07.409) 0:01:55.969 *********
2025-04-01 19:59:07.976474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.976484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.976536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.976616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.976652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976675 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.976689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.976698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.976768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.976851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.976863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.976890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.976899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976913 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.976922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.976935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.976983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.976993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.977063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.977103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977126 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.977135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.977147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.977196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.977276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.977319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.977352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.977404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.977434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.977628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.977637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.977694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.977712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.977803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-01 19:59:07.977812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.977841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.977851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.977864 | orchestrator | 2025-04-01 19:59:07.977873 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-04-01 19:59:07.977882 | orchestrator | Tuesday 01 April 2025 19:55:39 +0000 (0:00:05.254) 0:02:01.224 ********* 2025-04-01 19:59:07.977891 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.977900 | orchestrator | skipping: 
[testbed-node-3]
2025-04-01 19:59:07.977908 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.977917 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.977926 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.977934 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.977943 | orchestrator |
2025-04-01 19:59:07.977952 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-04-01 19:59:07.977961 | orchestrator | Tuesday 01 April 2025 19:55:43 +0000 (0:00:03.916) 0:02:05.140 *********
2025-04-01 19:59:07.977988 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.977997 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978006 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.978038 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978054 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978063 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978071 | orchestrator |
2025-04-01 19:59:07.978080 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-04-01 19:59:07.978090 | orchestrator | Tuesday 01 April 2025 19:55:46 +0000 (0:00:03.084) 0:02:08.225 *********
2025-04-01 19:59:07.978100 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978109 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.978118 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.978128 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978136 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978145 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978156 | orchestrator |
2025-04-01 19:59:07.978167 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-04-01 19:59:07.978177 | orchestrator | Tuesday 01 April 2025 19:55:51 +0000 (0:00:04.604) 0:02:12.830 *********
2025-04-01 19:59:07.978187 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978197 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.978206 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.978215 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978225 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978235 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978244 | orchestrator |
2025-04-01 19:59:07.978254 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-04-01 19:59:07.978263 | orchestrator | Tuesday 01 April 2025 19:55:53 +0000 (0:00:02.055) 0:02:14.885 *********
2025-04-01 19:59:07.978273 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978282 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.978292 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.978302 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978311 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978321 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978330 | orchestrator |
2025-04-01 19:59:07.978340 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-04-01 19:59:07.978350 | orchestrator | Tuesday 01 April 2025 19:55:56 +0000 (0:00:02.806) 0:02:17.692 *********
2025-04-01 19:59:07.978359 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978368 | orchestrator |
skipping: [testbed-node-1]
2025-04-01 19:59:07.978378 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978387 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.978396 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978414 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978424 | orchestrator |
2025-04-01 19:59:07.978433 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-04-01 19:59:07.978443 | orchestrator | Tuesday 01 April 2025 19:56:00 +0000 (0:00:04.129) 0:02:21.822 *********
2025-04-01 19:59:07.978452 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978463 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.978473 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978482 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.978492 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978502 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.978511 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978520 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.978528 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978537 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.978546 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-04-01 19:59:07.978567 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.978576 | orchestrator |
2025-04-01 19:59:07.978585 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-04-01 19:59:07.978597 | orchestrator | Tuesday 01 April 2025 19:56:04 +0000 (0:00:04.020) 0:02:25.842 *********
2025-04-01 19:59:07.978606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-01 19:59:07.978616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.978663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.978705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.978801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.978810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.978842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.978889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.978902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.978944 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.978953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.978974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.978990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.978999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.979013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.979041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979060 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.979069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.979101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.979189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.979226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979244 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.979261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.979275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.979316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.979370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.979440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.979489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.979503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979558 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.979572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.979613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.979629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.979660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.979669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979678 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.979693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.979931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.979951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.980103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.980206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.980224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.980304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.980316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980324 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.980332 | orchestrator | 2025-04-01 19:59:07.980341 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-04-01 19:59:07.980349 | orchestrator | Tuesday 01 April 2025 19:56:07 +0000 (0:00:02.761) 0:02:28.603 ********* 2025-04-01 19:59:07.980357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.980366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.980449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.980550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.980567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.980576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-01 19:59:07.980670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.980706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.980795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.980814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.980857 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.980907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.980938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.980946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.980955 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.980969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981033 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.981042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.981050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.981064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.981214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.981226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981334 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.981396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.981483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.981506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-04-01 19:59:07.981587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.981602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.981611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.981761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981879 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.981886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.981938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.981949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.981970 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.981978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.981985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.981993 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.982068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.982111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.982144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.982152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.982184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.982198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.982220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.982243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.982251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.982259 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982267 | orchestrator | 2025-04-01 19:59:07.982275 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-04-01 19:59:07.982282 | orchestrator | Tuesday 01 April 2025 19:56:11 +0000 (0:00:04.424) 0:02:33.027 ********* 2025-04-01 19:59:07.982290 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982297 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982304 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982312 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982319 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982326 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982333 | orchestrator | 2025-04-01 19:59:07.982341 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-04-01 19:59:07.982348 | orchestrator | Tuesday 01 April 2025 19:56:14 +0000 (0:00:03.269) 0:02:36.297 ********* 2025-04-01 19:59:07.982355 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982363 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982372 | orchestrator | skipping: 
[testbed-node-2] 2025-04-01 19:59:07.982379 | orchestrator | changed: [testbed-node-4] 2025-04-01 19:59:07.982387 | orchestrator | changed: [testbed-node-3] 2025-04-01 19:59:07.982394 | orchestrator | changed: [testbed-node-5] 2025-04-01 19:59:07.982401 | orchestrator | 2025-04-01 19:59:07.982408 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-04-01 19:59:07.982416 | orchestrator | Tuesday 01 April 2025 19:56:24 +0000 (0:00:09.267) 0:02:45.565 ********* 2025-04-01 19:59:07.982423 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982430 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982437 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982451 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982458 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982465 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982472 | orchestrator | 2025-04-01 19:59:07.982479 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-04-01 19:59:07.982487 | orchestrator | Tuesday 01 April 2025 19:56:27 +0000 (0:00:03.109) 0:02:48.674 ********* 2025-04-01 19:59:07.982494 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982501 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982508 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982529 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982537 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982545 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982552 | orchestrator | 2025-04-01 19:59:07.982559 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-04-01 19:59:07.982566 | orchestrator | Tuesday 01 April 2025 19:56:31 +0000 (0:00:04.087) 0:02:52.762 ********* 2025-04-01 19:59:07.982574 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982583 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982591 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982599 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982607 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982614 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982623 | orchestrator | 2025-04-01 19:59:07.982631 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-04-01 19:59:07.982639 | orchestrator | Tuesday 01 April 2025 19:56:34 +0000 (0:00:03.344) 0:02:56.106 ********* 2025-04-01 19:59:07.982647 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982655 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982663 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982672 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982680 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982688 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982696 | orchestrator | 2025-04-01 19:59:07.982704 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-04-01 19:59:07.982712 | orchestrator | Tuesday 01 April 2025 19:56:38 +0000 (0:00:03.822) 0:02:59.929 ********* 2025-04-01 19:59:07.982735 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982743 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982751 | orchestrator | 
skipping: [testbed-node-1] 2025-04-01 19:59:07.982759 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982767 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982774 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982782 | orchestrator | 2025-04-01 19:59:07.982790 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-04-01 19:59:07.982798 | orchestrator | Tuesday 01 April 2025 19:56:43 +0000 (0:00:04.785) 0:03:04.715 ********* 2025-04-01 19:59:07.982806 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982814 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982822 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982829 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982837 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982845 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982853 | orchestrator | 2025-04-01 19:59:07.982861 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-04-01 19:59:07.982869 | orchestrator | Tuesday 01 April 2025 19:56:50 +0000 (0:00:07.106) 0:03:11.821 ********* 2025-04-01 19:59:07.982877 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982885 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982892 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982900 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982908 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982921 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982930 | orchestrator | 2025-04-01 19:59:07.982937 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-04-01 19:59:07.982944 | orchestrator | Tuesday 01 April 2025 19:56:55 +0000 (0:00:04.873) 0:03:16.694 ********* 2025-04-01 19:59:07.982951 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.982958 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.982965 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.982972 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.982984 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.982991 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.982998 | orchestrator | 2025-04-01 19:59:07.983005 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-04-01 19:59:07.983012 | orchestrator | Tuesday 01 April 2025 19:56:59 +0000 (0:00:04.312) 0:03:21.007 ********* 2025-04-01 19:59:07.983019 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983027 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.983034 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983041 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.983048 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983055 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.983062 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983069 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.983076 | orchestrator | 
skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983083 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.983090 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-01 19:59:07.983097 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.983104 | orchestrator | 2025-04-01 19:59:07.983111 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-04-01 19:59:07.983118 | orchestrator | Tuesday 01 April 2025 19:57:02 +0000 (0:00:02.819) 0:03:23.827 ********* 2025-04-01 19:59:07.983141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.983198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.983272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.983301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983324 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:07.983346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.983381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.983451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.983493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983508 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:07.983515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.983570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.983646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.983688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983703 | orchestrator | skipping: [testbed-node-3] 2025-04-01 19:59:07.983710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.983781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.983852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.983899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.983914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.983948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.983963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.983989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.984052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.984082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.984146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984172 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:07.984180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984216 | orchestrator | skipping: [testbed-node-4] 2025-04-01 19:59:07.984223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.984231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.984272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984302 | orchestrator | skipping: [testbed-node-5] 2025-04-01 19:59:07.984309 | orchestrator | 2025-04-01 19:59:07.984316 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-04-01 19:59:07.984323 | orchestrator | Tuesday 01 April 2025 19:57:05 +0000 (0:00:02.671) 0:03:26.498 ********* 2025-04-01 19:59:07.984331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.984338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984357 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.984458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984532 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.984544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.984551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-01 19:59:07.984637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.984792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.984810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.984843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.984902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.984954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.984961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.984969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.984976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.984984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.985013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.985021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.985028 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-01 19:59:07.985064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-01 19:59:07.985099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.985122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.985130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.985144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.985167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.985179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.985194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.985201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.985231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.985246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.985254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.985275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.985291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-01 19:59:07.985306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 19:59:07.985320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 19:59:07.985334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-01 19:59:07.985348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-01 19:59:07.985356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-01 19:59:07.985363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-01 19:59:07.985370 | orchestrator |
2025-04-01 19:59:07.985378 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-01 19:59:07.985385 | orchestrator | Tuesday 01 April 2025 19:57:09 +0000 (0:00:04.000) 0:03:30.498 *********
2025-04-01 19:59:07.985392 | orchestrator | skipping: [testbed-node-0]
2025-04-01 19:59:07.985399 | orchestrator | skipping: [testbed-node-1]
2025-04-01 19:59:07.985406 | orchestrator | skipping: [testbed-node-2]
2025-04-01 19:59:07.985413 | orchestrator | skipping: [testbed-node-3]
2025-04-01 19:59:07.985420 | orchestrator | skipping: [testbed-node-4]
2025-04-01 19:59:07.985427 | orchestrator | skipping: [testbed-node-5]
2025-04-01 19:59:07.985434 | orchestrator |
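[Editor's illustration, not job output] The service definitions echoed above each carry a healthcheck block (interval, retries, start_period, a CMD-SHELL test, timeout). As a hedged sketch only, and not the way kolla-ansible itself wires these values, the snippet below shows how such a block could be mapped onto the standard Docker CLI --health-* flags; the sample values are copied from the neutron-ovn-agent entry above.

```python
# Illustrative only: translate a kolla-style healthcheck dict into Docker CLI
# --health-* flags. This is a stand-alone example, not the kolla-ansible code.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Build `docker run` arguments from a kolla-style healthcheck dict."""
    # The 'test' list starts with 'CMD-SHELL' followed by the shell command parts.
    cmd = " ".join(hc["test"][1:])
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example values taken from the neutron-ovn-agent definition in the log above.
example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-agent 6640"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(example))
```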
2025-04-01 19:59:07.985442 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-04-01 19:59:07.985452 | orchestrator | Tuesday 01 April 2025 19:57:09 +0000 (0:00:00.863) 0:03:31.362 *********
2025-04-01 19:59:07.985459 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:07.985466 | orchestrator |
2025-04-01 19:59:07.985473 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-04-01 19:59:07.985480 | orchestrator | Tuesday 01 April 2025 19:57:12 +0000 (0:00:02.977) 0:03:34.339 *********
2025-04-01 19:59:07.985487 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:07.985494 | orchestrator |
2025-04-01 19:59:07.985501 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-04-01 19:59:07.985509 | orchestrator | Tuesday 01 April 2025 19:57:14 +0000 (0:00:01.983) 0:03:36.323 *********
2025-04-01 19:59:07.985520 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:07.985527 | orchestrator |
2025-04-01 19:59:07.985534 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985541 | orchestrator | Tuesday 01 April 2025 19:57:48 +0000 (0:00:33.454) 0:04:09.777 *********
2025-04-01 19:59:07.985548 | orchestrator |
2025-04-01 19:59:07.985555 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985562 | orchestrator | Tuesday 01 April 2025 19:57:48 +0000 (0:00:00.074) 0:04:09.851 *********
2025-04-01 19:59:07.985569 | orchestrator |
2025-04-01 19:59:07.985576 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985583 | orchestrator | Tuesday 01 April 2025 19:57:48 +0000 (0:00:00.380) 0:04:10.231 *********
2025-04-01 19:59:07.985590 | orchestrator |
2025-04-01 19:59:07.985597 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985604 | orchestrator | Tuesday 01 April 2025 19:57:48 +0000 (0:00:00.101) 0:04:10.333 *********
2025-04-01 19:59:07.985611 | orchestrator |
2025-04-01 19:59:07.985618 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985625 | orchestrator | Tuesday 01 April 2025 19:57:49 +0000 (0:00:00.148) 0:04:10.482 *********
2025-04-01 19:59:07.985632 | orchestrator |
2025-04-01 19:59:07.985639 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-01 19:59:07.985646 | orchestrator | Tuesday 01 April 2025 19:57:49 +0000 (0:00:00.106) 0:04:10.588 *********
2025-04-01 19:59:07.985653 | orchestrator |
2025-04-01 19:59:07.985660 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-04-01 19:59:07.985666 | orchestrator | Tuesday 01 April 2025 19:57:49 +0000 (0:00:00.502) 0:04:11.091 *********
2025-04-01 19:59:07.985673 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:07.985680 | orchestrator | changed: [testbed-node-2]
2025-04-01 19:59:07.985687 | orchestrator | changed: [testbed-node-1]
2025-04-01 19:59:07.985694 | orchestrator |
2025-04-01 19:59:07.985702 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-04-01 19:59:07.985712 | orchestrator | Tuesday 01 April 2025 19:58:14 +0000 (0:00:24.459) 0:04:35.550 *********
2025-04-01 19:59:11.011065 | orchestrator | changed: [testbed-node-3]
2025-04-01 19:59:11.011249 | orchestrator | changed: [testbed-node-4]
2025-04-01 19:59:11.011285 | orchestrator | changed: [testbed-node-5]
2025-04-01 19:59:11.011312 | orchestrator |
2025-04-01 19:59:11.011339 | orchestrator | PLAY RECAP *********************************************************************
2025-04-01 19:59:11.011368 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-01 19:59:11.011398 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-01 19:59:11.011422 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-04-01 19:59:11.011448 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-01 19:59:11.011472 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-01 19:59:11.011497 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-04-01 19:59:11.011513 | orchestrator |
2025-04-01 19:59:11.011527 | orchestrator |
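[Editor's illustration, not job output] Recap blocks like the one above are easy to post-process when checking such console logs. A minimal stdlib-only sketch (not part of the job itself) that pulls the per-host counters out of PLAY RECAP lines and flags hosts with failed or unreachable results:

```python
import re

# Matches recap lines such as:
#   testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def check_recap(lines):
    """Return the hosts whose recap reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        m = RECAP_RE.search(line)
        if m and (int(m.group("failed")) or int(m.group("unreachable"))):
            bad.append(m.group("host"))
    return bad

sample = [
    "testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0",
]
assert check_recap(sample) == []  # all six nodes above report failed=0, unreachable=0
```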
2025-04-01 19:59:11.011544 | orchestrator | TASKS RECAP ********************************************************************
2025-04-01 19:59:11.011561 | orchestrator | Tuesday 01 April 2025 19:59:05 +0000 (0:00:50.894) 0:05:26.444 *********
2025-04-01 19:59:11.011577 | orchestrator | ===============================================================================
2025-04-01 19:59:11.011640 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.89s
2025-04-01 19:59:11.011657 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 33.45s
2025-04-01 19:59:11.011671 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.46s
2025-04-01 19:59:11.011685 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.62s
2025-04-01 19:59:11.011699 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 9.27s
2025-04-01 19:59:11.011919 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.86s
2025-04-01 19:59:11.011935 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 7.41s
2025-04-01 19:59:11.011951 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.21s
2025-04-01 19:59:11.011964 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 7.11s
2025-04-01 19:59:11.011977 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 6.80s
2025-04-01 19:59:11.011991 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 6.09s
2025-04-01 19:59:11.012005 | orchestrator | neutron : Copying over existing policy file ----------------------------- 5.60s
2025-04-01 19:59:11.012018 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.54s
2025-04-01 19:59:11.012031 | orchestrator | Load and persist kernel modules ----------------------------------------- 5.50s
2025-04-01 19:59:11.012044 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.45s
2025-04-01 19:59:11.012059 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 5.38s
2025-04-01 19:59:11.012072 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.25s
2025-04-01 19:59:11.012085 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.87s
2025-04-01 19:59:11.012098 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.79s
2025-04-01 19:59:11.012111 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.60s
2025-04-01 19:59:11.012144 | orchestrator | 2025-04-01 19:59:11 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:59:11.013107 | orchestrator | 2025-04-01 19:59:11 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED
2025-04-01 19:59:11.013137 | orchestrator | 2025-04-01 19:59:11 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED
2025-04-01 19:59:11.014583 | orchestrator | 2025-04-01 19:59:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:59:11.016153 | orchestrator | 2025-04-01 19:59:11 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED
2025-04-01 19:59:14.071449 | orchestrator | 2025-04-01 19:59:11 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:59:14.071551 | orchestrator | 2025-04-01 19:59:14 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:59:14.071874 | orchestrator | 2025-04-01 19:59:14 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED
2025-04-01 19:59:14.073622 | orchestrator | 2025-04-01 19:59:14 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED
2025-04-01 19:59:14.074253 | orchestrator | 2025-04-01 19:59:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:59:14.075125 | orchestrator | 2025-04-01 19:59:14 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED
2025-04-01 19:59:17.125195 | orchestrator | 2025-04-01 19:59:14 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:59:17.125329 | orchestrator | 2025-04-01 19:59:17 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
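[Editor's illustration, not job output] The manager lines above show a simple poll-until-done pattern: query each task's state, wait a second, repeat until everything reports SUCCESS. A minimal stdlib sketch of that pattern follows; get_task_state() is a hypothetical caller-supplied lookup, not an OSISM API.

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=600.0):
    """Poll task states until all reach SUCCESS, mirroring the log output above.

    get_task_state is a caller-supplied function (hypothetical here) that maps
    a task ID to a state string such as 'STARTED' or 'SUCCESS'.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```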
2025-04-01 19:59:17.126201 | orchestrator | 2025-04-01 19:59:17 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED
2025-04-01 19:59:17.127983 | orchestrator | 2025-04-01 19:59:17 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state STARTED
2025-04-01 19:59:17.128956 | orchestrator | 2025-04-01 19:59:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 19:59:17.130429 | orchestrator | 2025-04-01 19:59:17 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED
2025-04-01 19:59:20.195033 | orchestrator | 2025-04-01 19:59:17 | INFO  | Wait 1 second(s) until the next check
2025-04-01 19:59:20.195182 | orchestrator | 2025-04-01 19:59:20 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED
2025-04-01 19:59:20.198128 | orchestrator | 2025-04-01 19:59:20 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED
2025-04-01 19:59:20.198442 | orchestrator | 2025-04-01 19:59:20 | INFO  | Task c0884597-2257-4757-8a75-f3a9dc8842c0 is in state SUCCESS
2025-04-01 19:59:20.198476 | orchestrator |
2025-04-01 19:59:20.198493 | orchestrator |
2025-04-01 19:59:20.198509 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-01 19:59:20.198524 | orchestrator |
2025-04-01 19:59:20.198539 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-01 19:59:20.198555 | orchestrator | Tuesday 01 April 2025 19:57:16 +0000 (0:00:00.367) 0:00:00.367 *********
2025-04-01 19:59:20.198569 | orchestrator | ok: [testbed-node-0]
2025-04-01 19:59:20.198585 | orchestrator | ok: [testbed-node-1]
2025-04-01 19:59:20.198600 | orchestrator | ok: [testbed-node-2]
2025-04-01 19:59:20.198615 | orchestrator |
2025-04-01 19:59:20.198630 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-01 19:59:20.198645 | orchestrator | Tuesday 01 April 2025 19:57:17 +0000 (0:00:00.467) 0:00:00.835 *********
2025-04-01 19:59:20.198659 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-04-01 19:59:20.198674 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-04-01 19:59:20.198689 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-04-01 19:59:20.198703 | orchestrator |
2025-04-01 19:59:20.198718 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-04-01 19:59:20.198754 | orchestrator |
2025-04-01 19:59:20.198769 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-04-01 19:59:20.198783 | orchestrator | Tuesday 01 April 2025 19:57:17 +0000 (0:00:00.378) 0:00:01.213 *********
2025-04-01 19:59:20.198797 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-01 19:59:20.198812 | orchestrator |
2025-04-01 19:59:20.198826 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-04-01 19:59:20.198841 | orchestrator | Tuesday 01 April 2025 19:57:18 +0000 (0:00:00.933) 0:00:02.147 *********
2025-04-01 19:59:20.198855 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-04-01 19:59:20.198869 | orchestrator |
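[Editor's illustration, not job output] The service-ks-register entries above and below register magnum in Keystone: a service of type container-infra, internal and public endpoints on port 9511, plus a service user and an admin role grant. A rough equivalent using openstacksdk is sketched here under assumptions; the cloud name and region are placeholders, and this is not the module kolla-ansible actually uses.

```python
import openstack

# Placeholder entry from a local clouds.yaml; not part of the testbed job itself.
conn = openstack.connect(cloud="testbed-admin")

# Service of type container-infra, as shown in the log above.
service = conn.identity.create_service(name="magnum", type="container-infra")

# Internal and public endpoints on port 9511 (the region name is an assumption).
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9511/v1"),
    ("public", "https://api.testbed.osism.xyz:9511/v1"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )
```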
2025-04-01 19:59:20.198883 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-04-01 19:59:20.198897 | orchestrator | Tuesday 01 April 2025 19:57:21 +0000 (0:00:03.519) 0:00:05.667 *********
2025-04-01 19:59:20.198911 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-04-01 19:59:20.198926 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-04-01 19:59:20.198940 | orchestrator |
2025-04-01 19:59:20.198954 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-04-01 19:59:20.198968 | orchestrator | Tuesday 01 April 2025 19:57:28 +0000 (0:00:06.559) 0:00:12.227 *********
2025-04-01 19:59:20.198983 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-01 19:59:20.199022 | orchestrator |
2025-04-01 19:59:20.199036 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-04-01 19:59:20.199052 | orchestrator | Tuesday 01 April 2025 19:57:32 +0000 (0:00:03.914) 0:00:16.142 *********
2025-04-01 19:59:20.199068 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-01 19:59:20.199098 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-04-01 19:59:20.199115 | orchestrator |
2025-04-01 19:59:20.199131 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-04-01 19:59:20.199146 | orchestrator | Tuesday 01 April 2025 19:57:35 +0000 (0:00:03.377) 0:00:19.519 *********
2025-04-01 19:59:20.199162 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-01 19:59:20.199177 | orchestrator |
2025-04-01 19:59:20.199199 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-04-01 19:59:20.199215 | orchestrator | Tuesday 01 April 2025 19:57:39 +0000 (0:00:04.045) 0:00:23.565 *********
2025-04-01 19:59:20.199230 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-04-01 19:59:20.199245 | orchestrator |
2025-04-01 19:59:20.199261 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-04-01 19:59:20.199276 | orchestrator | Tuesday 01 April 2025 19:57:43 +0000 (0:00:04.107) 0:00:27.672 *********
2025-04-01 19:59:20.199291 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:20.199306 | orchestrator |
2025-04-01 19:59:20.199322 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-04-01 19:59:20.199338 | orchestrator | Tuesday 01 April 2025 19:57:47 +0000 (0:00:03.272) 0:00:30.945 *********
2025-04-01 19:59:20.199353 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:20.199368 | orchestrator |
2025-04-01 19:59:20.199384 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-04-01 19:59:20.199399 | orchestrator | Tuesday 01 April 2025 19:57:51 +0000 (0:00:04.448) 0:00:35.393 *********
2025-04-01 19:59:20.199414 | orchestrator | changed: [testbed-node-0]
2025-04-01 19:59:20.199429 | orchestrator |
2025-04-01 19:59:20.199443 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-04-01 19:59:20.199457 | orchestrator | Tuesday 01 April 2025 19:57:55 +0000 (0:00:04.174) 0:00:39.568 *********
2025-04-01 19:59:20.199488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.199540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.199565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.199581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.199597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.199626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.199642 | orchestrator | 2025-04-01 19:59:20.199656 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-04-01 19:59:20.199671 | orchestrator | Tuesday 01 April 2025 19:57:59 +0000 (0:00:03.574) 0:00:43.143 ********* 2025-04-01 19:59:20.199685 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.199699 | orchestrator | 2025-04-01 19:59:20.199713 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-04-01 19:59:20.199748 | orchestrator | Tuesday 01 April 2025 19:57:59 +0000 (0:00:00.263) 0:00:43.406 ********* 2025-04-01 19:59:20.199763 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.199778 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.199792 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.199813 | orchestrator | 2025-04-01 19:59:20.199828 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-04-01 19:59:20.199842 | orchestrator | Tuesday 01 April 2025 19:58:00 +0000 (0:00:00.697) 0:00:44.104 ********* 2025-04-01 19:59:20.199856 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 19:59:20.199870 | orchestrator | 2025-04-01 19:59:20.199883 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-04-01 19:59:20.199897 | orchestrator | Tuesday 01 April 2025 19:58:01 +0000 (0:00:01.054) 0:00:45.158 ********* 2025-04-01 19:59:20.199924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.199940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.199956 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.199970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.199993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200015 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.200041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200071 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.200086 | orchestrator | 2025-04-01 19:59:20.200100 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-04-01 19:59:20.200114 | orchestrator | Tuesday 01 April 2025 19:58:03 +0000 (0:00:02.462) 0:00:47.620 ********* 2025-04-01 19:59:20.200128 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.200249 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.200268 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.200282 | orchestrator | 2025-04-01 19:59:20.200297 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-01 19:59:20.200312 | orchestrator | Tuesday 01 April 2025 19:58:04 +0000 (0:00:00.310) 0:00:47.930 ********* 2025-04-01 19:59:20.200327 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 19:59:20.200341 | orchestrator | 2025-04-01 19:59:20.200356 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-04-01 19:59:20.200370 | orchestrator | Tuesday 01 April 2025 19:58:05 +0000 (0:00:00.819) 0:00:48.750 ********* 2025-04-01 19:59:20.200385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.200412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.200439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.200469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.200486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.200502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.200517 | orchestrator | 2025-04-01 19:59:20.200532 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-04-01 19:59:20.200640 | orchestrator | Tuesday 01 April 2025 19:58:08 +0000 (0:00:03.362) 0:00:52.113 ********* 2025-04-01 19:59:20.200668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200701 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.200750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200784 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.200799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200851 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.200866 | orchestrator | 2025-04-01 19:59:20.200880 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-04-01 19:59:20.200894 | orchestrator | Tuesday 01 April 2025 19:58:09 +0000 (0:00:01.394) 0:00:53.507 ********* 2025-04-01 19:59:20.200921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.200952 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.200966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.200994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.201010 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.201041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.201057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.201071 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.201085 | orchestrator | 2025-04-01 19:59:20.201099 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-04-01 19:59:20.201113 | orchestrator | Tuesday 01 April 2025 19:58:11 +0000 (0:00:01.586) 0:00:55.094 ********* 2025-04-01 19:59:20.201128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201184 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201251 | orchestrator | 2025-04-01 19:59:20.201267 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-04-01 19:59:20.201283 | orchestrator | Tuesday 01 April 2025 19:58:13 +0000 (0:00:02.555) 0:00:57.649 ********* 2025-04-01 19:59:20.201299 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201439 | orchestrator | 2025-04-01 19:59:20.201454 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-04-01 19:59:20.201476 | orchestrator | Tuesday 01 April 2025 19:58:29 +0000 (0:00:15.968) 0:01:13.618 ********* 2025-04-01 19:59:20.201493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.201508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.201524 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.201540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.201573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.201590 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.201613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-01 19:59:20.201628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-01 19:59:20.201643 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.201657 | orchestrator | 2025-04-01 19:59:20.201671 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-04-01 19:59:20.201685 | orchestrator | Tuesday 01 April 2025 19:58:31 +0000 (0:00:01.949) 0:01:15.567 ********* 2025-04-01 19:59:20.201700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-01 19:59:20.201808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 19:59:20.201859 | orchestrator | 2025-04-01 19:59:20.201873 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-01 19:59:20.201887 | orchestrator | Tuesday 01 April 2025 19:58:35 +0000 (0:00:03.921) 0:01:19.488 ********* 2025-04-01 19:59:20.201902 | orchestrator | skipping: [testbed-node-0] 2025-04-01 19:59:20.201916 | orchestrator | skipping: [testbed-node-1] 2025-04-01 19:59:20.201929 | orchestrator | skipping: [testbed-node-2] 2025-04-01 19:59:20.201943 | orchestrator | 2025-04-01 19:59:20.201957 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-04-01 19:59:20.201972 | orchestrator | Tuesday 01 April 2025 19:58:36 +0000 (0:00:01.054) 0:01:20.543 ********* 2025-04-01 19:59:20.201986 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:59:20.202000 | orchestrator | 2025-04-01 19:59:20.202061 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-04-01 19:59:20.202079 | orchestrator | Tuesday 01 April 2025 19:58:39 +0000 (0:00:03.022) 0:01:23.565 ********* 2025-04-01 19:59:20.202093 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:59:20.202107 | orchestrator | 2025-04-01 19:59:20.202121 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-04-01 19:59:20.202135 | orchestrator | 
Tuesday 01 April 2025 19:58:42 +0000 (0:00:02.656) 0:01:26.222 ********* 2025-04-01 19:59:20.202149 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:59:20.202163 | orchestrator | 2025-04-01 19:59:20.202178 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-01 19:59:20.202192 | orchestrator | Tuesday 01 April 2025 19:58:53 +0000 (0:00:11.225) 0:01:37.448 ********* 2025-04-01 19:59:20.202205 | orchestrator | 2025-04-01 19:59:20.202220 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-01 19:59:20.202234 | orchestrator | Tuesday 01 April 2025 19:58:53 +0000 (0:00:00.109) 0:01:37.558 ********* 2025-04-01 19:59:20.202247 | orchestrator | 2025-04-01 19:59:20.202261 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-01 19:59:20.202275 | orchestrator | Tuesday 01 April 2025 19:58:54 +0000 (0:00:00.217) 0:01:37.776 ********* 2025-04-01 19:59:20.202289 | orchestrator | 2025-04-01 19:59:20.202303 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-04-01 19:59:20.202318 | orchestrator | Tuesday 01 April 2025 19:58:54 +0000 (0:00:00.062) 0:01:37.838 ********* 2025-04-01 19:59:20.202331 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:59:20.202345 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:59:20.202359 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:59:20.202373 | orchestrator | 2025-04-01 19:59:20.202387 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-04-01 19:59:20.202401 | orchestrator | Tuesday 01 April 2025 19:59:09 +0000 (0:00:15.433) 0:01:53.271 ********* 2025-04-01 19:59:20.202415 | orchestrator | changed: [testbed-node-0] 2025-04-01 19:59:20.202429 | orchestrator | changed: [testbed-node-1] 2025-04-01 19:59:20.202443 | orchestrator | changed: [testbed-node-2] 2025-04-01 19:59:20.202457 | orchestrator | 2025-04-01 19:59:20.202471 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 19:59:20.202491 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-01 19:59:23.261313 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 19:59:23.261432 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 19:59:23.261451 | orchestrator | 2025-04-01 19:59:23.261467 | orchestrator | 2025-04-01 19:59:23.261482 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 19:59:23.261525 | orchestrator | Tuesday 01 April 2025 19:59:19 +0000 (0:00:09.863) 0:02:03.135 ********* 2025-04-01 19:59:23.261540 | orchestrator | =============================================================================== 2025-04-01 19:59:23.261554 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 15.97s 2025-04-01 19:59:23.261568 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.43s 2025-04-01 19:59:23.261582 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 11.23s 2025-04-01 19:59:23.261709 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.86s 2025-04-01 19:59:23.261776 | orchestrator 
| service-ks-register : magnum | Creating endpoints ----------------------- 6.56s 2025-04-01 19:59:23.261792 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.45s 2025-04-01 19:59:23.261820 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.18s 2025-04-01 19:59:23.261835 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.11s 2025-04-01 19:59:23.261850 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.05s 2025-04-01 19:59:23.261864 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.92s 2025-04-01 19:59:23.261878 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.91s 2025-04-01 19:59:23.261892 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.57s 2025-04-01 19:59:23.261906 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.52s 2025-04-01 19:59:23.261920 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.38s 2025-04-01 19:59:23.261933 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.36s 2025-04-01 19:59:23.261947 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.27s 2025-04-01 19:59:23.261961 | orchestrator | magnum : Creating Magnum database --------------------------------------- 3.02s 2025-04-01 19:59:23.261975 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.66s 2025-04-01 19:59:23.261989 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.56s 2025-04-01 19:59:23.262003 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.46s 2025-04-01 19:59:23.262067 | orchestrator | 2025-04-01 19:59:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:23.262085 | orchestrator | 2025-04-01 19:59:20 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:23.262099 | orchestrator | 2025-04-01 19:59:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:23.262219 | orchestrator | 2025-04-01 19:59:23 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:23.262242 | orchestrator | 2025-04-01 19:59:23 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:23.262263 | orchestrator | 2025-04-01 19:59:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:23.263163 | orchestrator | 2025-04-01 19:59:23 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:23.264001 | orchestrator | 2025-04-01 19:59:23 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:26.318860 | orchestrator | 2025-04-01 19:59:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:26.318999 | orchestrator | 2025-04-01 19:59:26 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:26.321504 | orchestrator | 2025-04-01 19:59:26 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:26.325594 | orchestrator | 2025-04-01 19:59:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:26.327479 | 
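
For readability, the per-service payload that the magnum role loops over in the "Copying over config.json files for services" and "Check magnum containers" tasks above can be restated as plain Python data. The values below are copied from the testbed-node-0 items in this log (the empty '' volume placeholder from the original items is omitted); the summary loop at the end is purely illustrative and is not part of kolla-ansible or OSISM.

# Condensed restatement of the magnum service items shown in the play output above.
magnum_services = {
    "magnum-api": {
        "container_name": "magnum_api",
        "image": "registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206",
        "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        },
        "haproxy": {
            "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "9511", "listen_port": "9511"},
            "magnum_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "9511", "listen_port": "9511"},
        },
    },
    "magnum-conductor": {
        "container_name": "magnum_conductor",
        "image": "registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206",
        "environment": {"http_proxy": "", "https_proxy": "",
                        "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9"},
        "volumes": [
            "/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "magnum:/var/lib/magnum/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
            "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
        },
    },
}

# Illustrative summary only: print each service's healthcheck command and interval.
for name, svc in magnum_services.items():
    print(f"{name} -> {svc['healthcheck']['test'][1]} every {svc['healthcheck']['interval']}s")
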
orchestrator | 2025-04-01 19:59:26 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:26.327516 | orchestrator | 2025-04-01 19:59:26 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:29.393499 | orchestrator | 2025-04-01 19:59:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:29.393627 | orchestrator | 2025-04-01 19:59:29 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:29.394481 | orchestrator | 2025-04-01 19:59:29 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:29.394521 | orchestrator | 2025-04-01 19:59:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:29.395314 | orchestrator | 2025-04-01 19:59:29 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:29.396062 | orchestrator | 2025-04-01 19:59:29 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:29.396189 | orchestrator | 2025-04-01 19:59:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:32.447228 | orchestrator | 2025-04-01 19:59:32 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:32.449086 | orchestrator | 2025-04-01 19:59:32 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:32.453772 | orchestrator | 2025-04-01 19:59:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:32.455462 | orchestrator | 2025-04-01 19:59:32 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:32.457936 | orchestrator | 2025-04-01 19:59:32 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:32.458720 | orchestrator | 2025-04-01 19:59:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:35.500813 | orchestrator | 2025-04-01 19:59:35 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:35.505292 | orchestrator | 2025-04-01 19:59:35 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:35.506556 | orchestrator | 2025-04-01 19:59:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:35.507272 | orchestrator | 2025-04-01 19:59:35 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:35.507903 | orchestrator | 2025-04-01 19:59:35 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:38.552103 | orchestrator | 2025-04-01 19:59:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:38.552227 | orchestrator | 2025-04-01 19:59:38 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:38.554935 | orchestrator | 2025-04-01 19:59:38 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:41.598537 | orchestrator | 2025-04-01 19:59:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:41.598889 | orchestrator | 2025-04-01 19:59:38 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:41.598923 | orchestrator | 2025-04-01 19:59:38 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:41.598939 | orchestrator | 2025-04-01 19:59:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:41.598974 | 
orchestrator | 2025-04-01 19:59:41 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED 2025-04-01 19:59:41.600351 | orchestrator | 2025-04-01 19:59:41 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:41.600394 | orchestrator | 2025-04-01 19:59:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:41.601808 | orchestrator | 2025-04-01 19:59:41 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:41.602919 | orchestrator | 2025-04-01 19:59:41 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:44.658009 | orchestrator | 2025-04-01 19:59:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:44.658357 | orchestrator | 2025-04-01 19:59:44 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state SUCCESS 2025-04-01 19:59:44.659248 | orchestrator | 2025-04-01 19:59:44 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:44.659284 | orchestrator | 2025-04-01 19:59:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:44.661209 | orchestrator | 2025-04-01 19:59:44 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:44.662327 | orchestrator | 2025-04-01 19:59:44 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:47.712413 | orchestrator | 2025-04-01 19:59:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:47.712576 | orchestrator | 2025-04-01 19:59:47 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state STARTED 2025-04-01 19:59:47.715160 | orchestrator | 2025-04-01 19:59:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:47.716830 | orchestrator | 2025-04-01 19:59:47 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:47.718371 | orchestrator | 2025-04-01 19:59:47 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 19:59:47.719822 | orchestrator | 2025-04-01 19:59:47 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:50.772509 | orchestrator | 2025-04-01 19:59:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:50.772684 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task d4933ae1-073e-4a5e-857f-e255791efe70 is in state SUCCESS 2025-04-01 19:59:50.773644 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:50.773679 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:50.775694 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 19:59:50.776173 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 19:59:50.777044 | orchestrator | 2025-04-01 19:59:50 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:53.823366 | orchestrator | 2025-04-01 19:59:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:53.823501 | orchestrator | 2025-04-01 19:59:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:53.827905 | orchestrator | 2025-04-01 19:59:53 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 
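
The "Task <uuid> is in state ..." / "Wait 1 second(s) until the next check" lines that follow the play come from a client-side wait loop that repeatedly asks the deployment manager for the state of each submitted task. Below is a minimal sketch of that polling pattern, not the actual OSISM implementation; get_task_state is a hypothetical callable standing in for the real task-state lookup.

import time
import logging

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)
log = logging.getLogger(__name__)

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task states until every task has left the STARTED state.

    get_task_state(task_id) is a hypothetical accessor that returns a state
    string such as 'STARTED' or 'SUCCESS'; any other state is treated as
    terminal in this simplified sketch.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):          # sorted() copies, safe to discard below
            state = get_task_state(task_id)
            log.info("Task %s is in state %s", task_id, state)
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

The same pattern is visible in the surrounding output: task ee339fd5-f3af-4161-95ea-1bdbea52a2af leaves STARTED for SUCCESS at 19:59:44 and d4933ae1-073e-4a5e-857f-e255791efe70 at 19:59:50, while the remaining tasks keep reporting STARTED on every check.
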
2025-04-01 19:59:53.830544 | orchestrator | 2025-04-01 19:59:53 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 19:59:53.832949 | orchestrator | 2025-04-01 19:59:53 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 19:59:53.834980 | orchestrator | 2025-04-01 19:59:53 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:53.835234 | orchestrator | 2025-04-01 19:59:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:56.868634 | orchestrator | 2025-04-01 19:59:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:56.870927 | orchestrator | 2025-04-01 19:59:56 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:56.871845 | orchestrator | 2025-04-01 19:59:56 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 19:59:56.874630 | orchestrator | 2025-04-01 19:59:56 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 19:59:56.875388 | orchestrator | 2025-04-01 19:59:56 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:59.925603 | orchestrator | 2025-04-01 19:59:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 19:59:59.925794 | orchestrator | 2025-04-01 19:59:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 19:59:59.928537 | orchestrator | 2025-04-01 19:59:59 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 19:59:59.929095 | orchestrator | 2025-04-01 19:59:59 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 19:59:59.929807 | orchestrator | 2025-04-01 19:59:59 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 19:59:59.930483 | orchestrator | 2025-04-01 19:59:59 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 19:59:59.930589 | orchestrator | 2025-04-01 19:59:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:02.985555 | orchestrator | 2025-04-01 20:00:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:02.987243 | orchestrator | 2025-04-01 20:00:02 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:02.987287 | orchestrator | 2025-04-01 20:00:02 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:02.987311 | orchestrator | 2025-04-01 20:00:02 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:02.988275 | orchestrator | 2025-04-01 20:00:02 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:06.021569 | orchestrator | 2025-04-01 20:00:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:06.021776 | orchestrator | 2025-04-01 20:00:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:06.022307 | orchestrator | 2025-04-01 20:00:06 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:06.022363 | orchestrator | 2025-04-01 20:00:06 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:06.023085 | orchestrator | 2025-04-01 20:00:06 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:06.023939 | orchestrator | 2025-04-01 20:00:06 | INFO  | Task 
70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:09.081826 | orchestrator | 2025-04-01 20:00:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:09.082083 | orchestrator | 2025-04-01 20:00:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:09.082689 | orchestrator | 2025-04-01 20:00:09 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:09.082769 | orchestrator | 2025-04-01 20:00:09 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:09.083694 | orchestrator | 2025-04-01 20:00:09 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:09.084385 | orchestrator | 2025-04-01 20:00:09 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:12.116581 | orchestrator | 2025-04-01 20:00:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:12.116773 | orchestrator | 2025-04-01 20:00:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:12.117540 | orchestrator | 2025-04-01 20:00:12 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:12.117577 | orchestrator | 2025-04-01 20:00:12 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:12.118229 | orchestrator | 2025-04-01 20:00:12 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:12.118710 | orchestrator | 2025-04-01 20:00:12 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:12.118860 | orchestrator | 2025-04-01 20:00:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:15.195947 | orchestrator | 2025-04-01 20:00:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:15.197885 | orchestrator | 2025-04-01 20:00:15 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:15.197931 | orchestrator | 2025-04-01 20:00:15 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:15.198336 | orchestrator | 2025-04-01 20:00:15 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:15.198445 | orchestrator | 2025-04-01 20:00:15 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:18.249831 | orchestrator | 2025-04-01 20:00:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:18.249971 | orchestrator | 2025-04-01 20:00:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:18.250531 | orchestrator | 2025-04-01 20:00:18 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:18.250570 | orchestrator | 2025-04-01 20:00:18 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:18.251303 | orchestrator | 2025-04-01 20:00:18 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:18.252133 | orchestrator | 2025-04-01 20:00:18 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:21.294903 | orchestrator | 2025-04-01 20:00:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:21.295050 | orchestrator | 2025-04-01 20:00:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:21.297365 | orchestrator | 2025-04-01 20:00:21 | INFO  | Task 
8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:21.298534 | orchestrator | 2025-04-01 20:00:21 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:21.300292 | orchestrator | 2025-04-01 20:00:21 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:21.302168 | orchestrator | 2025-04-01 20:00:21 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:21.302203 | orchestrator | 2025-04-01 20:00:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:24.345526 | orchestrator | 2025-04-01 20:00:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:24.346346 | orchestrator | 2025-04-01 20:00:24 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:24.347142 | orchestrator | 2025-04-01 20:00:24 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:24.350618 | orchestrator | 2025-04-01 20:00:24 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:24.351717 | orchestrator | 2025-04-01 20:00:24 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:24.351966 | orchestrator | 2025-04-01 20:00:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:27.396810 | orchestrator | 2025-04-01 20:00:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:27.401310 | orchestrator | 2025-04-01 20:00:27 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:27.401880 | orchestrator | 2025-04-01 20:00:27 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:27.403483 | orchestrator | 2025-04-01 20:00:27 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:27.408085 | orchestrator | 2025-04-01 20:00:27 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:30.463355 | orchestrator | 2025-04-01 20:00:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:30.463478 | orchestrator | 2025-04-01 20:00:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:30.466531 | orchestrator | 2025-04-01 20:00:30 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:30.467252 | orchestrator | 2025-04-01 20:00:30 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:30.472335 | orchestrator | 2025-04-01 20:00:30 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:30.476959 | orchestrator | 2025-04-01 20:00:30 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:30.477968 | orchestrator | 2025-04-01 20:00:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:33.527992 | orchestrator | 2025-04-01 20:00:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:33.530564 | orchestrator | 2025-04-01 20:00:33 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:33.533944 | orchestrator | 2025-04-01 20:00:33 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:33.533979 | orchestrator | 2025-04-01 20:00:33 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:36.583226 | orchestrator | 2025-04-01 
20:00:33 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:36.583341 | orchestrator | 2025-04-01 20:00:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:36.583375 | orchestrator | 2025-04-01 20:00:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:36.583651 | orchestrator | 2025-04-01 20:00:36 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:36.586149 | orchestrator | 2025-04-01 20:00:36 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:36.586936 | orchestrator | 2025-04-01 20:00:36 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:36.588037 | orchestrator | 2025-04-01 20:00:36 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:39.629702 | orchestrator | 2025-04-01 20:00:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:39.629841 | orchestrator | 2025-04-01 20:00:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:39.630192 | orchestrator | 2025-04-01 20:00:39 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:39.632077 | orchestrator | 2025-04-01 20:00:39 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:39.632788 | orchestrator | 2025-04-01 20:00:39 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:39.633592 | orchestrator | 2025-04-01 20:00:39 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:42.674162 | orchestrator | 2025-04-01 20:00:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:42.674309 | orchestrator | 2025-04-01 20:00:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:42.676132 | orchestrator | 2025-04-01 20:00:42 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:42.676230 | orchestrator | 2025-04-01 20:00:42 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:42.677405 | orchestrator | 2025-04-01 20:00:42 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:45.713394 | orchestrator | 2025-04-01 20:00:42 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:45.713504 | orchestrator | 2025-04-01 20:00:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:45.713539 | orchestrator | 2025-04-01 20:00:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:45.713678 | orchestrator | 2025-04-01 20:00:45 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:45.714377 | orchestrator | 2025-04-01 20:00:45 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:45.714938 | orchestrator | 2025-04-01 20:00:45 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:45.715675 | orchestrator | 2025-04-01 20:00:45 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:45.718440 | orchestrator | 2025-04-01 20:00:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:48.775807 | orchestrator | 2025-04-01 20:00:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:48.776710 | orchestrator | 2025-04-01 
20:00:48 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:48.777496 | orchestrator | 2025-04-01 20:00:48 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:48.779793 | orchestrator | 2025-04-01 20:00:48 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:48.782252 | orchestrator | 2025-04-01 20:00:48 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:48.782436 | orchestrator | 2025-04-01 20:00:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:51.834545 | orchestrator | 2025-04-01 20:00:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:51.834788 | orchestrator | 2025-04-01 20:00:51 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:51.835277 | orchestrator | 2025-04-01 20:00:51 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:51.835954 | orchestrator | 2025-04-01 20:00:51 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:51.836720 | orchestrator | 2025-04-01 20:00:51 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:54.873026 | orchestrator | 2025-04-01 20:00:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:54.873144 | orchestrator | 2025-04-01 20:00:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:54.873355 | orchestrator | 2025-04-01 20:00:54 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:54.874206 | orchestrator | 2025-04-01 20:00:54 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:54.874906 | orchestrator | 2025-04-01 20:00:54 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:54.875836 | orchestrator | 2025-04-01 20:00:54 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:57.912883 | orchestrator | 2025-04-01 20:00:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:00:57.912993 | orchestrator | 2025-04-01 20:00:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:00:57.914371 | orchestrator | 2025-04-01 20:00:57 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:00:57.915390 | orchestrator | 2025-04-01 20:00:57 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:00:57.916589 | orchestrator | 2025-04-01 20:00:57 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:00:57.917297 | orchestrator | 2025-04-01 20:00:57 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:00:57.917449 | orchestrator | 2025-04-01 20:00:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:00.971200 | orchestrator | 2025-04-01 20:01:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:00.971898 | orchestrator | 2025-04-01 20:01:00 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:00.973200 | orchestrator | 2025-04-01 20:01:00 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:00.973929 | orchestrator | 2025-04-01 20:01:00 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:00.977898 | 
orchestrator | 2025-04-01 20:01:00 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:04.024132 | orchestrator | 2025-04-01 20:01:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:04.024255 | orchestrator | 2025-04-01 20:01:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:04.025815 | orchestrator | 2025-04-01 20:01:04 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:04.027520 | orchestrator | 2025-04-01 20:01:04 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:04.028718 | orchestrator | 2025-04-01 20:01:04 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:04.030215 | orchestrator | 2025-04-01 20:01:04 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:04.031041 | orchestrator | 2025-04-01 20:01:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:07.078248 | orchestrator | 2025-04-01 20:01:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:07.078665 | orchestrator | 2025-04-01 20:01:07 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:07.079841 | orchestrator | 2025-04-01 20:01:07 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:07.080781 | orchestrator | 2025-04-01 20:01:07 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:07.081985 | orchestrator | 2025-04-01 20:01:07 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:10.129280 | orchestrator | 2025-04-01 20:01:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:10.129415 | orchestrator | 2025-04-01 20:01:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:10.130412 | orchestrator | 2025-04-01 20:01:10 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:10.132332 | orchestrator | 2025-04-01 20:01:10 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:10.133319 | orchestrator | 2025-04-01 20:01:10 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:10.134517 | orchestrator | 2025-04-01 20:01:10 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:10.135234 | orchestrator | 2025-04-01 20:01:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:13.186299 | orchestrator | 2025-04-01 20:01:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:13.187006 | orchestrator | 2025-04-01 20:01:13 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:13.188413 | orchestrator | 2025-04-01 20:01:13 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:13.189454 | orchestrator | 2025-04-01 20:01:13 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:13.190484 | orchestrator | 2025-04-01 20:01:13 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:16.231587 | orchestrator | 2025-04-01 20:01:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:16.231709 | orchestrator | 2025-04-01 20:01:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:16.235419 | 
orchestrator | 2025-04-01 20:01:16 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:16.236031 | orchestrator | 2025-04-01 20:01:16 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:16.241037 | orchestrator | 2025-04-01 20:01:16 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:16.242109 | orchestrator | 2025-04-01 20:01:16 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:16.242346 | orchestrator | 2025-04-01 20:01:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:19.280110 | orchestrator | 2025-04-01 20:01:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:19.280770 | orchestrator | 2025-04-01 20:01:19 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:19.281547 | orchestrator | 2025-04-01 20:01:19 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:19.282429 | orchestrator | 2025-04-01 20:01:19 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:19.283521 | orchestrator | 2025-04-01 20:01:19 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:22.328904 | orchestrator | 2025-04-01 20:01:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:22.329067 | orchestrator | 2025-04-01 20:01:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:22.330608 | orchestrator | 2025-04-01 20:01:22 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:22.331802 | orchestrator | 2025-04-01 20:01:22 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:22.332699 | orchestrator | 2025-04-01 20:01:22 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:22.333951 | orchestrator | 2025-04-01 20:01:22 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:22.335087 | orchestrator | 2025-04-01 20:01:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:25.397653 | orchestrator | 2025-04-01 20:01:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:25.398261 | orchestrator | 2025-04-01 20:01:25 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:25.404458 | orchestrator | 2025-04-01 20:01:25 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:25.405814 | orchestrator | 2025-04-01 20:01:25 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:25.407027 | orchestrator | 2025-04-01 20:01:25 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:25.407111 | orchestrator | 2025-04-01 20:01:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:28.441019 | orchestrator | 2025-04-01 20:01:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:28.442823 | orchestrator | 2025-04-01 20:01:28 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:28.444485 | orchestrator | 2025-04-01 20:01:28 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:28.446520 | orchestrator | 2025-04-01 20:01:28 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 
2025-04-01 20:01:28.448540 | orchestrator | 2025-04-01 20:01:28 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:31.490955 | orchestrator | 2025-04-01 20:01:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:31.491123 | orchestrator | 2025-04-01 20:01:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:31.493235 | orchestrator | 2025-04-01 20:01:31 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:31.493265 | orchestrator | 2025-04-01 20:01:31 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:31.494442 | orchestrator | 2025-04-01 20:01:31 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:31.495084 | orchestrator | 2025-04-01 20:01:31 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:31.495316 | orchestrator | 2025-04-01 20:01:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:34.529161 | orchestrator | 2025-04-01 20:01:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:34.529516 | orchestrator | 2025-04-01 20:01:34 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:34.530591 | orchestrator | 2025-04-01 20:01:34 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:34.531499 | orchestrator | 2025-04-01 20:01:34 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:34.532309 | orchestrator | 2025-04-01 20:01:34 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:34.532655 | orchestrator | 2025-04-01 20:01:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:37.567431 | orchestrator | 2025-04-01 20:01:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:37.567907 | orchestrator | 2025-04-01 20:01:37 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:37.568905 | orchestrator | 2025-04-01 20:01:37 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:37.569897 | orchestrator | 2025-04-01 20:01:37 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:37.571242 | orchestrator | 2025-04-01 20:01:37 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:40.618384 | orchestrator | 2025-04-01 20:01:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:40.618519 | orchestrator | 2025-04-01 20:01:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:40.619034 | orchestrator | 2025-04-01 20:01:40 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:40.620109 | orchestrator | 2025-04-01 20:01:40 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:40.622973 | orchestrator | 2025-04-01 20:01:40 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:40.623491 | orchestrator | 2025-04-01 20:01:40 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:40.623765 | orchestrator | 2025-04-01 20:01:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:43.659982 | orchestrator | 2025-04-01 20:01:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 
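
Because the wait loop prints one line per task per check, spotting the actual state changes in this stretch of output takes some scanning. The short helper below is purely illustrative (not part of the job tooling): it pulls the "Task <uuid> is in state <STATE>" pairs out of captured console text and reports only the transitions.

import re

TASK_RE = re.compile(r"Task ([0-9a-f-]{36}) is in state ([A-Z]+)")

def state_transitions(console_text):
    """Yield (task_id, old_state, new_state) whenever a task changes state."""
    last = {}
    for task_id, state in TASK_RE.findall(console_text):
        if last.get(task_id) not in (None, state):
            yield task_id, last[task_id], state
        last[task_id] = state

# Example with two lines in the format used by this log:
sample = (
    "2025-04-01 19:59:41 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state STARTED\n"
    "2025-04-01 19:59:44 | INFO  | Task ee339fd5-f3af-4161-95ea-1bdbea52a2af is in state SUCCESS\n"
)
for task_id, old, new in state_transitions(sample):
    # prints: ee339fd5-f3af-4161-95ea-1bdbea52a2af: STARTED -> SUCCESS
    print(f"{task_id}: {old} -> {new}")
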
2025-04-01 20:01:43.661285 | orchestrator | 2025-04-01 20:01:43 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:43.662136 | orchestrator | 2025-04-01 20:01:43 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:43.663201 | orchestrator | 2025-04-01 20:01:43 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:43.664229 | orchestrator | 2025-04-01 20:01:43 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:43.664311 | orchestrator | 2025-04-01 20:01:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:46.701930 | orchestrator | 2025-04-01 20:01:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:46.707732 | orchestrator | 2025-04-01 20:01:46 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:46.709272 | orchestrator | 2025-04-01 20:01:46 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:46.712529 | orchestrator | 2025-04-01 20:01:46 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:46.714070 | orchestrator | 2025-04-01 20:01:46 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:46.714417 | orchestrator | 2025-04-01 20:01:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:49.763847 | orchestrator | 2025-04-01 20:01:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:49.764523 | orchestrator | 2025-04-01 20:01:49 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:49.768242 | orchestrator | 2025-04-01 20:01:49 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:49.769122 | orchestrator | 2025-04-01 20:01:49 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:49.770504 | orchestrator | 2025-04-01 20:01:49 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:52.823537 | orchestrator | 2025-04-01 20:01:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:52.823671 | orchestrator | 2025-04-01 20:01:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:52.827397 | orchestrator | 2025-04-01 20:01:52 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:52.834081 | orchestrator | 2025-04-01 20:01:52 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:52.838293 | orchestrator | 2025-04-01 20:01:52 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:52.842957 | orchestrator | 2025-04-01 20:01:52 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:55.913814 | orchestrator | 2025-04-01 20:01:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:55.913945 | orchestrator | 2025-04-01 20:01:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:55.917634 | orchestrator | 2025-04-01 20:01:55 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:55.918412 | orchestrator | 2025-04-01 20:01:55 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:55.920661 | orchestrator | 2025-04-01 20:01:55 | INFO  | Task 
710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:55.923537 | orchestrator | 2025-04-01 20:01:55 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:01:58.970201 | orchestrator | 2025-04-01 20:01:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:01:58.970334 | orchestrator | 2025-04-01 20:01:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:01:58.973357 | orchestrator | 2025-04-01 20:01:58 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:01:58.973951 | orchestrator | 2025-04-01 20:01:58 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:01:58.974706 | orchestrator | 2025-04-01 20:01:58 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:01:58.976540 | orchestrator | 2025-04-01 20:01:58 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:02.032910 | orchestrator | 2025-04-01 20:01:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:02.033051 | orchestrator | 2025-04-01 20:02:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:02.034123 | orchestrator | 2025-04-01 20:02:02 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:02.034166 | orchestrator | 2025-04-01 20:02:02 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:02.034839 | orchestrator | 2025-04-01 20:02:02 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:02.035404 | orchestrator | 2025-04-01 20:02:02 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:05.074967 | orchestrator | 2025-04-01 20:02:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:05.075122 | orchestrator | 2025-04-01 20:02:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:05.076121 | orchestrator | 2025-04-01 20:02:05 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:05.077006 | orchestrator | 2025-04-01 20:02:05 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:05.078164 | orchestrator | 2025-04-01 20:02:05 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:05.079109 | orchestrator | 2025-04-01 20:02:05 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:08.119636 | orchestrator | 2025-04-01 20:02:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:08.119818 | orchestrator | 2025-04-01 20:02:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:08.120467 | orchestrator | 2025-04-01 20:02:08 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:08.121925 | orchestrator | 2025-04-01 20:02:08 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:08.123196 | orchestrator | 2025-04-01 20:02:08 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:08.124230 | orchestrator | 2025-04-01 20:02:08 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:11.160117 | orchestrator | 2025-04-01 20:02:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:11.160255 | orchestrator | 2025-04-01 20:02:11 | INFO  | Task 
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:11.162732 | orchestrator | 2025-04-01 20:02:11 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:11.162829 | orchestrator | 2025-04-01 20:02:11 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:11.162944 | orchestrator | 2025-04-01 20:02:11 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:11.163914 | orchestrator | 2025-04-01 20:02:11 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:14.204122 | orchestrator | 2025-04-01 20:02:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:14.204256 | orchestrator | 2025-04-01 20:02:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:14.204472 | orchestrator | 2025-04-01 20:02:14 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:14.205819 | orchestrator | 2025-04-01 20:02:14 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:14.207241 | orchestrator | 2025-04-01 20:02:14 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:14.208080 | orchestrator | 2025-04-01 20:02:14 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:14.209478 | orchestrator | 2025-04-01 20:02:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:17.255431 | orchestrator | 2025-04-01 20:02:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:20.304864 | orchestrator | 2025-04-01 20:02:17 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:20.304990 | orchestrator | 2025-04-01 20:02:17 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:20.305009 | orchestrator | 2025-04-01 20:02:17 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:20.305025 | orchestrator | 2025-04-01 20:02:17 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:20.305082 | orchestrator | 2025-04-01 20:02:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:20.305123 | orchestrator | 2025-04-01 20:02:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:20.308067 | orchestrator | 2025-04-01 20:02:20 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:20.313575 | orchestrator | 2025-04-01 20:02:20 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:20.314428 | orchestrator | 2025-04-01 20:02:20 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:20.315963 | orchestrator | 2025-04-01 20:02:20 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:20.316095 | orchestrator | 2025-04-01 20:02:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:23.369814 | orchestrator | 2025-04-01 20:02:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:23.371203 | orchestrator | 2025-04-01 20:02:23 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:23.371374 | orchestrator | 2025-04-01 20:02:23 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:23.371511 | orchestrator | 2025-04-01 
20:02:23 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:23.373118 | orchestrator | 2025-04-01 20:02:23 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:26.436023 | orchestrator | 2025-04-01 20:02:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:26.436149 | orchestrator | 2025-04-01 20:02:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:26.437185 | orchestrator | 2025-04-01 20:02:26 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:26.437216 | orchestrator | 2025-04-01 20:02:26 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:26.438847 | orchestrator | 2025-04-01 20:02:26 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:26.440435 | orchestrator | 2025-04-01 20:02:26 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:29.488238 | orchestrator | 2025-04-01 20:02:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:29.488352 | orchestrator | 2025-04-01 20:02:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:29.491950 | orchestrator | 2025-04-01 20:02:29 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:29.492974 | orchestrator | 2025-04-01 20:02:29 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:29.493005 | orchestrator | 2025-04-01 20:02:29 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:32.524029 | orchestrator | 2025-04-01 20:02:29 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:32.524147 | orchestrator | 2025-04-01 20:02:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:32.524182 | orchestrator | 2025-04-01 20:02:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:32.526846 | orchestrator | 2025-04-01 20:02:32 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:35.563465 | orchestrator | 2025-04-01 20:02:32 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:35.563612 | orchestrator | 2025-04-01 20:02:32 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:35.563633 | orchestrator | 2025-04-01 20:02:32 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:35.563649 | orchestrator | 2025-04-01 20:02:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:35.563699 | orchestrator | 2025-04-01 20:02:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:35.565218 | orchestrator | 2025-04-01 20:02:35 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:35.565469 | orchestrator | 2025-04-01 20:02:35 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:35.565500 | orchestrator | 2025-04-01 20:02:35 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:35.566365 | orchestrator | 2025-04-01 20:02:35 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:38.622420 | orchestrator | 2025-04-01 20:02:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:38.622551 | orchestrator | 2025-04-01 
20:02:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:38.623192 | orchestrator | 2025-04-01 20:02:38 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:38.623225 | orchestrator | 2025-04-01 20:02:38 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:38.625317 | orchestrator | 2025-04-01 20:02:38 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:38.625585 | orchestrator | 2025-04-01 20:02:38 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:41.663325 | orchestrator | 2025-04-01 20:02:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:41.663463 | orchestrator | 2025-04-01 20:02:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:41.663966 | orchestrator | 2025-04-01 20:02:41 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:41.664698 | orchestrator | 2025-04-01 20:02:41 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:41.665876 | orchestrator | 2025-04-01 20:02:41 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:41.666979 | orchestrator | 2025-04-01 20:02:41 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:41.668611 | orchestrator | 2025-04-01 20:02:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:44.726501 | orchestrator | 2025-04-01 20:02:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:44.729925 | orchestrator | 2025-04-01 20:02:44 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:44.733252 | orchestrator | 2025-04-01 20:02:44 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:44.734973 | orchestrator | 2025-04-01 20:02:44 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:44.737538 | orchestrator | 2025-04-01 20:02:44 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:44.737736 | orchestrator | 2025-04-01 20:02:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:47.786186 | orchestrator | 2025-04-01 20:02:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:47.786902 | orchestrator | 2025-04-01 20:02:47 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state STARTED 2025-04-01 20:02:47.787731 | orchestrator | 2025-04-01 20:02:47 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:47.788745 | orchestrator | 2025-04-01 20:02:47 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:47.789582 | orchestrator | 2025-04-01 20:02:47 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:47.789915 | orchestrator | 2025-04-01 20:02:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:50.845529 | orchestrator | 2025-04-01 20:02:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:50.851389 | orchestrator | 2025-04-01 20:02:50 | INFO  | Task 8fb32944-94f1-4f9b-9f2f-cc07bc105f0f is in state SUCCESS 2025-04-01 20:02:50.853302 | orchestrator | 2025-04-01 20:02:50.853342 | orchestrator | 2025-04-01 20:02:50.853358 | orchestrator | PLAY [Download ironic ipa images] 
********************************************** 2025-04-01 20:02:50.853373 | orchestrator | 2025-04-01 20:02:50.853388 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-04-01 20:02:50.853402 | orchestrator | Tuesday 01 April 2025 19:53:38 +0000 (0:00:00.234) 0:00:00.234 ********* 2025-04-01 20:02:50.853416 | orchestrator | changed: [localhost] 2025-04-01 20:02:50.853432 | orchestrator | 2025-04-01 20:02:50.853446 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-04-01 20:02:50.853461 | orchestrator | Tuesday 01 April 2025 19:53:39 +0000 (0:00:00.759) 0:00:00.993 ********* 2025-04-01 20:02:50.853474 | orchestrator | 2025-04-01 20:02:50.853489 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853503 | orchestrator | 2025-04-01 20:02:50.853517 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853531 | orchestrator | 2025-04-01 20:02:50.853545 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853559 | orchestrator | 2025-04-01 20:02:50.853573 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853587 | orchestrator | 2025-04-01 20:02:50.853601 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853615 | orchestrator | 2025-04-01 20:02:50.853629 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853643 | orchestrator | 2025-04-01 20:02:50.853657 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-01 20:02:50.853671 | orchestrator | changed: [localhost] 2025-04-01 20:02:50.853685 | orchestrator | 2025-04-01 20:02:50.853699 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-04-01 20:02:50.853714 | orchestrator | Tuesday 01 April 2025 19:59:28 +0000 (0:05:49.103) 0:05:50.096 ********* 2025-04-01 20:02:50.853727 | orchestrator | changed: [localhost] 2025-04-01 20:02:50.853742 | orchestrator | 2025-04-01 20:02:50.853783 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:02:50.853798 | orchestrator | 2025-04-01 20:02:50.853813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:02:50.853827 | orchestrator | Tuesday 01 April 2025 19:59:42 +0000 (0:00:13.680) 0:06:03.776 ********* 2025-04-01 20:02:50.853841 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:02:50.853855 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:02:50.853870 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:02:50.853884 | orchestrator | 2025-04-01 20:02:50.853899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:02:50.853934 | orchestrator | Tuesday 01 April 2025 19:59:42 +0000 (0:00:00.567) 0:06:04.344 ********* 2025-04-01 20:02:50.853952 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-04-01 20:02:50.853968 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-04-01 20:02:50.854010 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-04-01 
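The "Download ironic ipa images" play above spends almost six minutes (349 s) fetching the ironic-agent initramfs and then another 14 s on the kernel, after first creating the destination directory. A rough Python equivalent of those three tasks, as a sketch only; the base URL, file names, and target directory are placeholders, not the values used by the play:

```python
import pathlib
import urllib.request

# Placeholder values -- the real URLs and target directory come from the play's variables.
BASE_URL = "https://example.org/ironic-python-agent"
DEST_DIR = pathlib.Path("/opt/ironic-agent-images")

def download_ipa_images():
    # TASK [Ensure the destination directory exists]
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    # TASK [Download ironic-agent initramfs] / TASK [Download ironic-agent kernel]
    for name in ("ironic-agent.initramfs", "ironic-agent.kernel"):
        target = DEST_DIR / name
        if not target.exists():  # roughly the idempotency check a get_url-style task performs
            urllib.request.urlretrieve(f"{BASE_URL}/{name}", target)
```

The "STILL ALIVE" markers in between are keepalive messages emitted while the long-running initramfs download holds the console open.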
20:02:50.854078 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-04-01 20:02:50.854095 | orchestrator | 2025-04-01 20:02:50.854112 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-04-01 20:02:50.854129 | orchestrator | skipping: no hosts matched 2025-04-01 20:02:50.854146 | orchestrator | 2025-04-01 20:02:50.854162 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:02:50.854178 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.854196 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.854220 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.854235 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.854249 | orchestrator | 2025-04-01 20:02:50.854263 | orchestrator | 2025-04-01 20:02:50.854278 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:02:50.854292 | orchestrator | Tuesday 01 April 2025 19:59:43 +0000 (0:00:00.518) 0:06:04.862 ********* 2025-04-01 20:02:50.854506 | orchestrator | =============================================================================== 2025-04-01 20:02:50.854525 | orchestrator | Download ironic-agent initramfs --------------------------------------- 349.10s 2025-04-01 20:02:50.854539 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.68s 2025-04-01 20:02:50.854553 | orchestrator | Ensure the destination directory exists --------------------------------- 0.76s 2025-04-01 20:02:50.854567 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-04-01 20:02:50.854581 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-04-01 20:02:50.854595 | orchestrator | 2025-04-01 20:02:50.854609 | orchestrator | 2025-04-01 20:02:50.854623 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:02:50.854638 | orchestrator | 2025-04-01 20:02:50.854652 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:02:50.854666 | orchestrator | Tuesday 01 April 2025 19:59:10 +0000 (0:00:00.485) 0:00:00.485 ********* 2025-04-01 20:02:50.854680 | orchestrator | ok: [testbed-manager] 2025-04-01 20:02:50.854694 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:02:50.854708 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:02:50.854722 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:02:50.854736 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:02:50.854750 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:02:50.854799 | orchestrator | ok: [testbed-node-5] 2025-04-01 20:02:50.854814 | orchestrator | 2025-04-01 20:02:50.854828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:02:50.854843 | orchestrator | Tuesday 01 April 2025 19:59:11 +0000 (0:00:01.560) 0:00:02.045 ********* 2025-04-01 20:02:50.854870 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854885 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854905 | 
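The recurring "Group hosts based on configuration" plays build dynamic groups such as enable_ironic_False or enable_ceph_rgw_True from boolean host variables; each role play then targets only the _True group, which is why "Apply role ironic" reports "skipping: no hosts matched" while ceph-rgw and prometheus run on all nodes. A small sketch of that grouping logic, with plain Python standing in for Ansible's group_by and an illustrative, abbreviated inventory:

```python
from collections import defaultdict

# Illustrative host variables matching the flag values visible in this run.
hostvars = {
    "testbed-manager": {"enable_ceph_rgw": True, "enable_ironic": False},
    "testbed-node-0":  {"enable_ceph_rgw": True, "enable_ironic": False},
    "testbed-node-1":  {"enable_ceph_rgw": True, "enable_ironic": False},
}

def group_by_flag(hostvars, flag):
    """Mirror group_by: group key is '<flag>_<value>', members are hosts with that value."""
    groups = defaultdict(list)
    for host, variables in hostvars.items():
        groups[f"{flag}_{variables[flag]}"].append(host)
    return groups

groups = group_by_flag(hostvars, "enable_ironic")
play_hosts = groups.get("enable_ironic_True", [])  # empty list -> "skipping: no hosts matched"
```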
orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854920 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854934 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854948 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854962 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-04-01 20:02:50.854976 | orchestrator | 2025-04-01 20:02:50.854990 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-01 20:02:50.855014 | orchestrator | 2025-04-01 20:02:50.855028 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-04-01 20:02:50.855042 | orchestrator | Tuesday 01 April 2025 19:59:13 +0000 (0:00:01.919) 0:00:03.965 ********* 2025-04-01 20:02:50.855056 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:02:50.855072 | orchestrator | 2025-04-01 20:02:50.855143 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-04-01 20:02:50.855159 | orchestrator | Tuesday 01 April 2025 19:59:16 +0000 (0:00:02.164) 0:00:06.129 ********* 2025-04-01 20:02:50.855173 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-04-01 20:02:50.855187 | orchestrator | 2025-04-01 20:02:50.855201 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-04-01 20:02:50.855215 | orchestrator | Tuesday 01 April 2025 19:59:19 +0000 (0:00:03.747) 0:00:09.876 ********* 2025-04-01 20:02:50.855229 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-04-01 20:02:50.855243 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-04-01 20:02:50.855257 | orchestrator | 2025-04-01 20:02:50.855271 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-04-01 20:02:50.855285 | orchestrator | Tuesday 01 April 2025 19:59:26 +0000 (0:00:06.978) 0:00:16.855 ********* 2025-04-01 20:02:50.855299 | orchestrator | ok: [testbed-manager] => (item=service) 2025-04-01 20:02:50.855313 | orchestrator | 2025-04-01 20:02:50.855327 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-04-01 20:02:50.855341 | orchestrator | Tuesday 01 April 2025 19:59:30 +0000 (0:00:03.568) 0:00:20.423 ********* 2025-04-01 20:02:50.855356 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 20:02:50.855370 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-04-01 20:02:50.855384 | orchestrator | 2025-04-01 20:02:50.855398 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-04-01 20:02:50.855412 | orchestrator | Tuesday 01 April 2025 19:59:34 +0000 (0:00:04.148) 0:00:24.572 ********* 2025-04-01 20:02:50.855427 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-04-01 20:02:50.855441 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-04-01 20:02:50.855455 | orchestrator | 2025-04-01 20:02:50.855469 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-04-01 20:02:50.855488 | orchestrator | Tuesday 01 April 2025 19:59:41 +0000 (0:00:07.227) 0:00:31.800 ********* 2025-04-01 20:02:50.855503 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-04-01 20:02:50.855517 | orchestrator | 2025-04-01 20:02:50.855531 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:02:50.855545 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855565 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855580 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855594 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855608 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855623 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855644 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:02:50.855659 | orchestrator | 2025-04-01 20:02:50.855673 | orchestrator | 2025-04-01 20:02:50.855687 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:02:50.855701 | orchestrator | Tuesday 01 April 2025 19:59:48 +0000 (0:00:06.775) 0:00:38.575 ********* 2025-04-01 20:02:50.855715 | orchestrator | =============================================================================== 2025-04-01 20:02:50.855728 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.23s 2025-04-01 20:02:50.855742 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.98s 2025-04-01 20:02:50.855775 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.78s 2025-04-01 20:02:50.855983 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.15s 2025-04-01 20:02:50.856002 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.75s 2025-04-01 20:02:50.856016 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.57s 2025-04-01 20:02:50.856030 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.16s 2025-04-01 20:02:50.856045 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.92s 2025-04-01 20:02:50.856059 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.56s 2025-04-01 20:02:50.856073 | orchestrator | 2025-04-01 20:02:50.856086 | orchestrator | 2025-04-01 20:02:50.856100 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:02:50.856115 | orchestrator | 2025-04-01 20:02:50.856129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:02:50.856143 | orchestrator | Tuesday 01 April 2025 19:57:38 +0000 (0:00:00.345) 0:00:00.345 ********* 2025-04-01 20:02:50.856156 | orchestrator | ok: [testbed-manager] 2025-04-01 20:02:50.856203 | 
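The service-ks-register tasks above register Ceph RGW's object store in Keystone: a "swift" service, internal and public endpoints, a ceph_rgw user in the service project, the ResellerAdmin role, and an admin role grant for that user. A condensed openstacksdk sketch of the same sequence, with values copied from the log where visible and placeholders elsewhere; this is a sketch of the steps, not the kolla-ansible role itself:

```python
import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is a placeholder

# Service and its internal/public endpoints (swift / object-store).
service = conn.identity.create_service(name="swift", type="object-store")
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}.items():
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Service user, role, and role grant (password placeholder).
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="ceph_rgw", password="...",
                                 default_project_id=project.id)
conn.identity.create_role(name="ResellerAdmin")
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)
```

The role runs these steps only from testbed-manager (the delegate host), which is why the other nodes report ok=3 with no changes in the PLAY RECAP above.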
orchestrator | ok: [testbed-node-0] 2025-04-01 20:02:50.856219 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:02:50.856233 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:02:50.856247 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:02:50.856261 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:02:50.856275 | orchestrator | ok: [testbed-node-5] 2025-04-01 20:02:50.856326 | orchestrator | 2025-04-01 20:02:50.856341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:02:50.856356 | orchestrator | Tuesday 01 April 2025 19:57:39 +0000 (0:00:00.979) 0:00:01.324 ********* 2025-04-01 20:02:50.856370 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856384 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856398 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856442 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856459 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856473 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856487 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-04-01 20:02:50.856502 | orchestrator | 2025-04-01 20:02:50.856516 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-04-01 20:02:50.856547 | orchestrator | 2025-04-01 20:02:50.856563 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-04-01 20:02:50.856577 | orchestrator | Tuesday 01 April 2025 19:57:40 +0000 (0:00:01.038) 0:00:02.362 ********* 2025-04-01 20:02:50.856591 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:02:50.856606 | orchestrator | 2025-04-01 20:02:50.856620 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-04-01 20:02:50.856634 | orchestrator | Tuesday 01 April 2025 19:57:42 +0000 (0:00:01.594) 0:00:03.956 ********* 2025-04-01 20:02:50.856659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.856710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.856736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.856795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 20:02:50.856813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.856828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.856852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.856876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.856905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.856922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.856938 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.856953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.856975 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.856990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.857015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.857031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.857055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.857071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.857086 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.857113 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.857129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.857144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.857169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.857191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.857207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.857222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.857255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.857271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.857418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.857440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.859082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 20:02:50.859159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859175 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.859250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': 
'9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.859325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.859428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.859509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.859522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.859535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.859598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.859639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.859699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.859788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.859849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.859890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.859905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.859920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.859957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.859989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.860005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860018 | orchestrator | 2025-04-01 20:02:50.860032 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-04-01 20:02:50.860046 | orchestrator | Tuesday 01 April 2025 19:57:46 +0000 (0:00:03.773) 0:00:07.730 ********* 2025-04-01 20:02:50.860060 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:02:50.860074 | orchestrator | 2025-04-01 20:02:50.860087 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-04-01 20:02:50.860100 | orchestrator | Tuesday 01 April 2025 19:57:48 +0000 (0:00:01.863) 0:00:09.593 ********* 2025-04-01 20:02:50.860113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 20:02:50.860185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.860315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860370 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860478 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 20:02:50.860492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.860570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860585 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.860626 | orchestrator | 2025-04-01 20:02:50.860640 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-04-01 20:02:50.860653 | orchestrator | Tuesday 01 April 2025 19:57:55 +0000 (0:00:07.392) 0:00:16.986 ********* 2025-04-01 20:02:50.860666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.860679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.860747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.860791 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.860805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.860819 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.860851 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.860888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.860930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.860944 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.860958 | orchestrator | skipping: [testbed-manager] 2025-04-01 20:02:50.860979 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.860993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861007 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861082 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.861095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861144 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.861157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861216 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.861229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861257 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861271 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.861289 | orchestrator | 2025-04-01 20:02:50.861303 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-04-01 20:02:50.861316 | orchestrator | Tuesday 01 April 2025 19:58:00 +0000 (0:00:04.974) 0:00:21.960 ********* 2025-04-01 20:02:50.861329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861786 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.861806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.861820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861867 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.861894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.861973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.861993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862007 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.862057 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.862072 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.862085 | orchestrator | skipping: [testbed-manager] 2025-04-01 20:02:50.862098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.862112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862175 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.862188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.862201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862305 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.862318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-01 20:02:50.862339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.862379 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.862397 | orchestrator | 2025-04-01 20:02:50.862410 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-04-01 20:02:50.862423 | orchestrator | Tuesday 01 April 2025 19:58:05 +0000 (0:00:04.529) 0:00:26.490 ********* 2025-04-01 20:02:50.862436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.862449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.862493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-04-01 20:02:50.862515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.862539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.862553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 20:02:50.862566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.862624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862868 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862881 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862907 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.862938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.862966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863272 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863294 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
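[Editor's note] The per-item "skipping"/"changed" results in this task follow from how each Prometheus component is modelled as an entry in a services dictionary (container_name, group, enabled, image, volumes, optional haproxy settings) and is only acted on for hosts that belong to that service's group. The Python fragment below is an illustrative sketch of that filtering logic, reconstructed from the item dicts echoed in this log; the function name, the inventory mapping, and the reduced service definitions are hypothetical and are not taken from kolla-ansible itself.

    # Hypothetical sketch: mirrors the per-host filtering visible in the log above.
    # Each service definition copies the shape of the item dicts printed by Ansible;
    # only the fields needed for the example are kept.
    prometheus_services = {
        "prometheus-server": {
            "container_name": "prometheus_server",
            "group": "prometheus",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206",
        },
        "prometheus-node-exporter": {
            "container_name": "prometheus_node_exporter",
            "group": "prometheus-node-exporter",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206",
        },
    }

    # Hypothetical inventory mapping: which hosts belong to which group.
    inventory_groups = {
        "prometheus": {"testbed-manager"},
        "prometheus-node-exporter": {"testbed-manager", "testbed-node-0", "testbed-node-1"},
    }

    def should_act(host: str, service: dict) -> bool:
        """A service item is processed on a host only if it is enabled and the
        host is a member of the service's group; otherwise the task reports
        'skipping' for that (host, item) pair, as seen in the log."""
        return service["enabled"] and host in inventory_groups.get(service["group"], set())

    for host in ("testbed-manager", "testbed-node-0"):
        for name, svc in prometheus_services.items():
            state = "changed" if should_act(host, svc) else "skipping"
            print(f"{state}: [{host}] => (item={{'key': '{name}', ...}})")

Running this sketch prints a skip/change line per (host, item) pair in the same pattern as the console output that continues below.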
2025-04-01 20:02:50.863354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863461 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 20:02:50.863500 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863513 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.863709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.863722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.863733 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.863781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.863855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.863915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.863958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.863982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.863992 | orchestrator | 2025-04-01 20:02:50.864002 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-04-01 20:02:50.864013 | orchestrator | Tuesday 01 April 2025 19:58:12 +0000 (0:00:07.891) 0:00:34.381 ********* 2025-04-01 20:02:50.864023 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 20:02:50.864034 | orchestrator | 2025-04-01 20:02:50.864044 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-04-01 20:02:50.864054 | orchestrator | Tuesday 01 April 2025 19:58:13 +0000 (0:00:00.632) 0:00:35.013 ********* 2025-04-01 20:02:50.864074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864094 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864105 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864138 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864151 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864162 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864172 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 
1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864191 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864252 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864264 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1063320, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.864274 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864310 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864323 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864342 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864353 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864370 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864380 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864391 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864425 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864445 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864457 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864473 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864484 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864494 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-04-01 20:02:50.864505 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864570 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864586 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1063326, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.864597 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864608 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864619 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864660 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864673 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864684 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864701 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864722 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864733 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864781 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864805 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864817 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864834 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864846 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1063322, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.864857 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864869 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864911 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864926 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864945 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864957 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864969 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864980 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.864992 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865038 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865053 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865071 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865083 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865094 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865105 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865124 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865159 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865173 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1063325, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865190 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865202 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.865214 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865225 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.865236 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865248 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.865259 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865279 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865314 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865334 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.865345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865357 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.865368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 
1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-01 20:02:50.865379 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.865390 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1063338, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.898002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865402 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1063328, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865422 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1063324, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8900018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865434 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1063327, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8910017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865473 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1063336, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8970017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-01 20:02:50.865487 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1063323, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8890016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-01 20:02:50.865498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1063330, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8930018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-04-01 20:02:50.865510 | orchestrator |
2025-04-01 20:02:50.865521 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-04-01 20:02:50.865532 | orchestrator | Tuesday 01 April 2025 19:59:03 +0000 (0:00:49.759) 0:01:24.772 *********
2025-04-01 20:02:50.865542 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-01 20:02:50.865553 | orchestrator |
2025-04-01 20:02:50.865564 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-04-01 20:02:50.865574 | orchestrator | Tuesday 01 April 2025 19:59:03 +0000 (0:00:00.488) 0:01:25.261 *********
2025-04-01 20:02:50.865585 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865606 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865628 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865639 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-01 20:02:50.865650 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865660 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865671 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865693 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865704 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-04-01 20:02:50.865714 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865736 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865802 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865813 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865840 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865861 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865871 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865892 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865903 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865913 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865923 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.865933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865944 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.865954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.865964 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.865975 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.866012 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.866051 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-04-01 20:02:50.866062 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-04-01 20:02:50.866073 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-04-01 20:02:50.866083 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-04-01 20:02:50.866093 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-04-01 20:02:50.866103 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-04-01 20:02:50.866114 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-04-01 20:02:50.866124 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-04-01 20:02:50.866134 | orchestrator |
2025-04-01 20:02:50.866145 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-04-01 20:02:50.866155 | orchestrator | Tuesday 01 April 2025 19:59:05 +0000 (0:00:01.549) 0:01:26.811 *********
2025-04-01 20:02:50.866165 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-01 20:02:50.866176 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:02:50.866186 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-01 20:02:50.866196 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:02:50.866206 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-01 20:02:50.866217 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.866227 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-01 20:02:50.866237 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.866248 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-04-01 20:02:50.866258 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:02:50.866268 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-01 20:02:50.866278 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.866289 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-04-01 20:02:50.866299 | orchestrator | 2025-04-01 20:02:50.866309 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-04-01 20:02:50.866319 | orchestrator | Tuesday 01 April 2025 19:59:26 +0000 (0:00:21.330) 0:01:48.141 ********* 2025-04-01 20:02:50.866330 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866338 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.866352 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866361 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.866370 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866378 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.866387 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866395 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.866404 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866413 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.866421 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-01 20:02:50.866430 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.866439 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-04-01 20:02:50.866447 | orchestrator | 2025-04-01 20:02:50.866456 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-04-01 20:02:50.866465 | orchestrator | Tuesday 01 April 2025 19:59:32 +0000 (0:00:06.262) 0:01:54.404 ********* 2025-04-01 20:02:50.866473 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866482 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.866491 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866500 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.866509 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866517 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.866526 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866535 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.866544 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866552 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.866561 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-01 20:02:50.866570 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.866582 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-04-01 20:02:50.866591 | orchestrator | 2025-04-01 20:02:50.866600 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-04-01 20:02:50.866609 | orchestrator | Tuesday 01 April 2025 19:59:37 +0000 (0:00:04.503) 0:01:58.907 ********* 2025-04-01 20:02:50.866617 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-01 20:02:50.866626 | orchestrator | 2025-04-01 20:02:50.866639 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-04-01 20:02:50.866647 | orchestrator | Tuesday 01 April 2025 19:59:38 +0000 (0:00:00.645) 0:01:59.552 ********* 2025-04-01 20:02:50.866656 | orchestrator | skipping: [testbed-manager] 2025-04-01 20:02:50.866665 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.866673 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.866682 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.866691 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.866699 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.866708 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.866721 | orchestrator | 2025-04-01 20:02:50.866730 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-04-01 20:02:50.866738 | orchestrator | Tuesday 01 April 2025 19:59:39 +0000 (0:00:00.905) 0:02:00.458 ********* 2025-04-01 20:02:50.866747 | orchestrator | skipping: [testbed-manager] 2025-04-01 20:02:50.866769 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.866778 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.866787 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.866795 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.866804 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.866812 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:02:50.866821 | orchestrator | 2025-04-01 20:02:50.866830 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-04-01 20:02:50.866839 | orchestrator | Tuesday 01 April 2025 19:59:44 +0000 (0:00:05.084) 0:02:05.542 ********* 2025-04-01 20:02:50.866847 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-01 20:02:50.866856 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.866865 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-01 20:02:50.866874 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:02:50.866882 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-01 20:02:50.866891 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:02:50.866903 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-01 20:02:50.866913 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.866923 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-01 20:02:50.866934 | orchestrator | 
skipping: [testbed-node-5]
2025-04-01 20:02:50.866943 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-01 20:02:50.866952 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.866961 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-04-01 20:02:50.866970 | orchestrator | skipping: [testbed-manager]
2025-04-01 20:02:50.866978 | orchestrator |
2025-04-01 20:02:50.866987 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-04-01 20:02:50.866996 | orchestrator | Tuesday 01 April 2025 19:59:48 +0000 (0:00:04.432) 0:02:09.975 *********
2025-04-01 20:02:50.867005 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867013 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:02:50.867022 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867031 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:02:50.867039 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867048 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:02:50.867057 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867065 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.867074 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867083 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.867092 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867100 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:02:50.867109 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-04-01 20:02:50.867118 | orchestrator |
2025-04-01 20:02:50.867126 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-04-01 20:02:50.867140 | orchestrator | Tuesday 01 April 2025 19:59:53 +0000 (0:00:04.928) 0:02:14.903 *********
2025-04-01 20:02:50.867148 | orchestrator | [WARNING]: Skipped
2025-04-01 20:02:50.867157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-04-01 20:02:50.867166 | orchestrator | due to this access issue:
2025-04-01 20:02:50.867174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-04-01 20:02:50.867183 | orchestrator | not a directory
2025-04-01 20:02:50.867192 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-01 20:02:50.867200 | orchestrator |
2025-04-01 20:02:50.867209 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-04-01 20:02:50.867222 | orchestrator | Tuesday 01 April 2025 19:59:56 +0000 (0:00:02.914) 0:02:17.818 *********
2025-04-01 20:02:50.867231 | orchestrator | skipping: [testbed-manager]
2025-04-01 20:02:50.867239 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:02:50.867248 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.867256 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:02:50.867265 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:02:50.867273 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.867282 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:02:50.867291 | orchestrator |
2025-04-01 20:02:50.867299 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-04-01 20:02:50.867308 | orchestrator | Tuesday 01 April 2025 19:59:57 +0000 (0:00:01.441) 0:02:19.259 *********
2025-04-01 20:02:50.867317 | orchestrator | skipping: [testbed-manager]
2025-04-01 20:02:50.867325 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:02:50.867334 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.867342 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:02:50.867351 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:02:50.867359 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.867368 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:02:50.867376 | orchestrator |
2025-04-01 20:02:50.867385 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] ****************
2025-04-01 20:02:50.867394 | orchestrator | Tuesday 01 April 2025 19:59:59 +0000 (0:00:01.293) 0:02:20.552 *********
2025-04-01 20:02:50.867403 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867411 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:02:50.867420 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867429 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.867437 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867446 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:02:50.867454 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867463 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:02:50.867472 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867481 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:02:50.867489 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867498 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:02:50.867507 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)
2025-04-01 20:02:50.867516 | orchestrator | skipping: [testbed-manager]
2025-04-01 20:02:50.867525 | orchestrator |
2025-04-01 20:02:50.867533 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] **************
2025-04-01 20:02:50.867542 | orchestrator | Tuesday 01 April 2025 20:00:03 +0000 (0:00:04.286) 0:02:24.838 *********
2025-04-01 20:02:50.867551 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-01 20:02:50.867564 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:02:50.867573 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)
2025-04-01 20:02:50.867581 | orchestrator | skipping: [testbed-node-0] 2025-04-01
20:02:50.867590 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-01 20:02:50.867599 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:02:50.867607 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-01 20:02:50.867616 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:02:50.867624 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-01 20:02:50.867633 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:02:50.867642 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-01 20:02:50.867650 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:02:50.867659 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-01 20:02:50.867667 | orchestrator | skipping: [testbed-manager] 2025-04-01 20:02:50.867676 | orchestrator | 2025-04-01 20:02:50.867685 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-04-01 20:02:50.867693 | orchestrator | Tuesday 01 April 2025 20:00:08 +0000 (0:00:04.741) 0:02:29.580 ********* 2025-04-01 20:02:50.867703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-01 20:02:50.867826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-01 20:02:50.867880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867890 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867918 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.867940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.867958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.867967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.867987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.867997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-01 20:02:50.868068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 
20:02:50.868100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868229 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868279 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-01 20:02:50.868292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-01 20:02:50.868444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-01 20:02:50.868453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-01 20:02:50.868462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868471 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868480 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.868493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 
20:02:50.868522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.868559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.868592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-01 20:02:50.868611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-01 20:02:50.868629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-01 20:02:50.868637 | orchestrator | 2025-04-01 20:02:50.868646 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-04-01 20:02:50.868655 | orchestrator | Tuesday 01 April 2025 20:00:15 +0000 (0:00:06.966) 0:02:36.547 ********* 2025-04-01 20:02:50.868664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-01 20:02:50.868673 | orchestrator | 2025-04-01 20:02:50.868681 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868690 | orchestrator | Tuesday 01 April 2025 20:00:18 +0000 (0:00:03.529) 0:02:40.076 ********* 2025-04-01 20:02:50.868699 | orchestrator | 2025-04-01 20:02:50.868707 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868716 | orchestrator | Tuesday 01 April 2025 20:00:18 +0000 (0:00:00.071) 0:02:40.148 ********* 
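The long "prometheus : Check prometheus containers" loop output above comes from kolla-ansible walking a per-service dictionary on every host: a host reports "changed" only for services that are enabled and whose service group it belongs to, and "skipping" otherwise, which is why the same item appears as changed on some nodes and skipped on others. A minimal Python sketch of that per-item decision follows; the service entries, the group memberships, and the check_service helper are illustrative assumptions for this log, not the actual kolla-ansible role code.

    # Minimal sketch (assumed, not kolla-ansible source) of the decision that
    # produces the mixed "changed"/"skipping" results per host in the
    # "Check prometheus containers" loop above.

    # Illustrative subset of the service dict printed in the log items.
    prometheus_services = {
        "prometheus-cadvisor": {"enabled": True, "group": "prometheus-cadvisor"},
        "prometheus-alertmanager": {"enabled": True, "group": "prometheus-alertmanager"},
        "prometheus-openstack-exporter": {"enabled": False, "group": "prometheus-openstack-exporter"},
    }

    # Illustrative group membership per host (assumption for the example).
    host_groups = {
        "testbed-manager": {"prometheus-cadvisor", "prometheus-alertmanager"},
        "testbed-node-0": {"prometheus-cadvisor"},
    }

    def check_service(host: str, service: dict) -> str:
        """Return the per-item result a host would report for one service."""
        if not service["enabled"] or service["group"] not in host_groups[host]:
            return "skipping"
        # Enabled and in the right group: the container check acts, so the
        # task reports a change for this item on this host.
        return "changed"

    for host in host_groups:
        for name, service in prometheus_services.items():
            print(f"{check_service(host, service)}: [{host}] => (item={name})")

Running the sketch prints one result line per host/service pair in the same changed/skipping shape as the loop output above.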
2025-04-01 20:02:50.868725 | orchestrator | 2025-04-01 20:02:50.868733 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868742 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.280) 0:02:40.428 ********* 2025-04-01 20:02:50.868767 | orchestrator | 2025-04-01 20:02:50.868776 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868785 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.064) 0:02:40.492 ********* 2025-04-01 20:02:50.868794 | orchestrator | 2025-04-01 20:02:50.868803 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868811 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.064) 0:02:40.557 ********* 2025-04-01 20:02:50.868820 | orchestrator | 2025-04-01 20:02:50.868828 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868837 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.056) 0:02:40.613 ********* 2025-04-01 20:02:50.868846 | orchestrator | 2025-04-01 20:02:50.868854 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-01 20:02:50.868863 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.279) 0:02:40.893 ********* 2025-04-01 20:02:50.868872 | orchestrator | 2025-04-01 20:02:50.868880 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-04-01 20:02:50.868889 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.077) 0:02:40.970 ********* 2025-04-01 20:02:50.868897 | orchestrator | changed: [testbed-manager] 2025-04-01 20:02:50.868906 | orchestrator | 2025-04-01 20:02:50.868918 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-04-01 20:02:50.868927 | orchestrator | Tuesday 01 April 2025 20:00:38 +0000 (0:00:19.154) 0:03:00.125 ********* 2025-04-01 20:02:50.868936 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.868945 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.868953 | orchestrator | changed: [testbed-manager] 2025-04-01 20:02:50.868962 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:02:50.868971 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:02:50.868979 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:02:50.868988 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:02:50.868997 | orchestrator | 2025-04-01 20:02:50.869005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-04-01 20:02:50.869014 | orchestrator | Tuesday 01 April 2025 20:01:05 +0000 (0:00:26.413) 0:03:26.540 ********* 2025-04-01 20:02:50.869023 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:02:50.869032 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.869040 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.869052 | orchestrator | 2025-04-01 20:02:50.869061 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-04-01 20:02:50.869070 | orchestrator | Tuesday 01 April 2025 20:01:18 +0000 (0:00:13.096) 0:03:39.637 ********* 2025-04-01 20:02:50.869078 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.869087 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.869096 | orchestrator | changed: 
[testbed-node-2] 2025-04-01 20:02:50.869104 | orchestrator | 2025-04-01 20:02:50.869113 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-04-01 20:02:50.869121 | orchestrator | Tuesday 01 April 2025 20:01:34 +0000 (0:00:16.763) 0:03:56.400 ********* 2025-04-01 20:02:50.869130 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.869139 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.869147 | orchestrator | changed: [testbed-manager] 2025-04-01 20:02:50.869156 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:02:50.869165 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:02:50.869173 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:02:50.869182 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:02:50.869190 | orchestrator | 2025-04-01 20:02:50.869199 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-04-01 20:02:50.869207 | orchestrator | Tuesday 01 April 2025 20:02:01 +0000 (0:00:26.499) 0:04:22.900 ********* 2025-04-01 20:02:50.869216 | orchestrator | changed: [testbed-manager] 2025-04-01 20:02:50.869225 | orchestrator | 2025-04-01 20:02:50.869233 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-04-01 20:02:50.869247 | orchestrator | Tuesday 01 April 2025 20:02:12 +0000 (0:00:11.060) 0:04:33.960 ********* 2025-04-01 20:02:50.869255 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:02:50.869264 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:02:50.869273 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:02:50.869281 | orchestrator | 2025-04-01 20:02:50.869290 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-04-01 20:02:50.869299 | orchestrator | Tuesday 01 April 2025 20:02:27 +0000 (0:00:14.998) 0:04:48.959 ********* 2025-04-01 20:02:50.869307 | orchestrator | changed: [testbed-manager] 2025-04-01 20:02:50.869316 | orchestrator | 2025-04-01 20:02:50.869324 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-04-01 20:02:50.869333 | orchestrator | Tuesday 01 April 2025 20:02:36 +0000 (0:00:09.389) 0:04:58.349 ********* 2025-04-01 20:02:50.869341 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:02:50.869350 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:02:50.869359 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:02:50.869367 | orchestrator | 2025-04-01 20:02:50.869376 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:02:50.869385 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-01 20:02:50.869394 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-01 20:02:50.869403 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-01 20:02:50.869411 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-01 20:02:50.869420 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-01 20:02:50.869429 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-01 20:02:50.869438 | orchestrator | testbed-node-5 : ok=12  
changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-01 20:02:50.869447 | orchestrator | 2025-04-01 20:02:50.869455 | orchestrator | 2025-04-01 20:02:50.869464 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:02:50.869472 | orchestrator | Tuesday 01 April 2025 20:02:49 +0000 (0:00:12.744) 0:05:11.093 ********* 2025-04-01 20:02:50.869481 | orchestrator | =============================================================================== 2025-04-01 20:02:50.869490 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 49.76s 2025-04-01 20:02:50.869498 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 26.50s 2025-04-01 20:02:50.869507 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 26.41s 2025-04-01 20:02:50.869519 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 21.33s 2025-04-01 20:02:50.869531 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.15s 2025-04-01 20:02:53.908177 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 16.76s 2025-04-01 20:02:53.908287 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 15.00s 2025-04-01 20:02:53.908305 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.10s 2025-04-01 20:02:53.908320 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.74s 2025-04-01 20:02:53.908334 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.06s 2025-04-01 20:02:53.908375 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.39s 2025-04-01 20:02:53.908390 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.89s 2025-04-01 20:02:53.908405 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.39s 2025-04-01 20:02:53.908419 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.97s 2025-04-01 20:02:53.908433 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.26s 2025-04-01 20:02:53.908447 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 5.08s 2025-04-01 20:02:53.908461 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 4.97s 2025-04-01 20:02:53.908476 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.93s 2025-04-01 20:02:53.908490 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 4.74s 2025-04-01 20:02:53.908504 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.53s 2025-04-01 20:02:53.908518 | orchestrator | 2025-04-01 20:02:50 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:53.908533 | orchestrator | 2025-04-01 20:02:50 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:53.908547 | orchestrator | 2025-04-01 20:02:50 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:53.908562 | orchestrator | 2025-04-01 20:02:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 
20:02:53.908592 | orchestrator | 2025-04-01 20:02:53 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:02:53.909321 | orchestrator | 2025-04-01 20:02:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:53.912454 | orchestrator | 2025-04-01 20:02:53 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:53.912912 | orchestrator | 2025-04-01 20:02:53 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:53.913815 | orchestrator | 2025-04-01 20:02:53 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:02:56.954185 | orchestrator | 2025-04-01 20:02:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:02:56.954319 | orchestrator | 2025-04-01 20:02:56 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:02:56.956077 | orchestrator | 2025-04-01 20:02:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:02:56.958010 | orchestrator | 2025-04-01 20:02:56 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:02:56.959972 | orchestrator | 2025-04-01 20:02:56 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:02:56.962809 | orchestrator | 2025-04-01 20:02:56 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:00.031295 | orchestrator | 2025-04-01 20:02:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:00.031470 | orchestrator | 2025-04-01 20:03:00 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:00.040607 | orchestrator | 2025-04-01 20:03:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:00.040807 | orchestrator | 2025-04-01 20:03:00 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:00.042324 | orchestrator | 2025-04-01 20:03:00 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:00.042876 | orchestrator | 2025-04-01 20:03:00 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:03.100910 | orchestrator | 2025-04-01 20:03:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:03.101092 | orchestrator | 2025-04-01 20:03:03 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:03.101899 | orchestrator | 2025-04-01 20:03:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:03.103499 | orchestrator | 2025-04-01 20:03:03 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:03.105065 | orchestrator | 2025-04-01 20:03:03 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:03.106061 | orchestrator | 2025-04-01 20:03:03 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:06.158471 | orchestrator | 2025-04-01 20:03:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:06.158640 | orchestrator | 2025-04-01 20:03:06 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:06.160746 | orchestrator | 2025-04-01 20:03:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:06.162818 | orchestrator | 2025-04-01 20:03:06 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in 
state STARTED 2025-04-01 20:03:06.164831 | orchestrator | 2025-04-01 20:03:06 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:06.166641 | orchestrator | 2025-04-01 20:03:06 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:09.213427 | orchestrator | 2025-04-01 20:03:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:09.213594 | orchestrator | 2025-04-01 20:03:09 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:09.214308 | orchestrator | 2025-04-01 20:03:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:09.216597 | orchestrator | 2025-04-01 20:03:09 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:09.217914 | orchestrator | 2025-04-01 20:03:09 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:09.219108 | orchestrator | 2025-04-01 20:03:09 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:12.262296 | orchestrator | 2025-04-01 20:03:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:12.262431 | orchestrator | 2025-04-01 20:03:12 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:12.266977 | orchestrator | 2025-04-01 20:03:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:12.267604 | orchestrator | 2025-04-01 20:03:12 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:12.267631 | orchestrator | 2025-04-01 20:03:12 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:12.267646 | orchestrator | 2025-04-01 20:03:12 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:12.267665 | orchestrator | 2025-04-01 20:03:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:15.315136 | orchestrator | 2025-04-01 20:03:15 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:15.315340 | orchestrator | 2025-04-01 20:03:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:15.316040 | orchestrator | 2025-04-01 20:03:15 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:15.317013 | orchestrator | 2025-04-01 20:03:15 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:15.318251 | orchestrator | 2025-04-01 20:03:15 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:18.385796 | orchestrator | 2025-04-01 20:03:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:18.385925 | orchestrator | 2025-04-01 20:03:18 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:18.387587 | orchestrator | 2025-04-01 20:03:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:18.388005 | orchestrator | 2025-04-01 20:03:18 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:18.388030 | orchestrator | 2025-04-01 20:03:18 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:18.391598 | orchestrator | 2025-04-01 20:03:18 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:21.448894 | orchestrator | 2025-04-01 20:03:18 | INFO  | Wait 1 second(s) until the 
next check 2025-04-01 20:03:21.449064 | orchestrator | 2025-04-01 20:03:21 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:21.449563 | orchestrator | 2025-04-01 20:03:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:21.450241 | orchestrator | 2025-04-01 20:03:21 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:21.451297 | orchestrator | 2025-04-01 20:03:21 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:21.452063 | orchestrator | 2025-04-01 20:03:21 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:21.452793 | orchestrator | 2025-04-01 20:03:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:24.497115 | orchestrator | 2025-04-01 20:03:24 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:24.500916 | orchestrator | 2025-04-01 20:03:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:24.503858 | orchestrator | 2025-04-01 20:03:24 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:24.506655 | orchestrator | 2025-04-01 20:03:24 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:24.509373 | orchestrator | 2025-04-01 20:03:24 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:27.562566 | orchestrator | 2025-04-01 20:03:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:27.562718 | orchestrator | 2025-04-01 20:03:27 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:27.564517 | orchestrator | 2025-04-01 20:03:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:27.565659 | orchestrator | 2025-04-01 20:03:27 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:27.565685 | orchestrator | 2025-04-01 20:03:27 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:27.566576 | orchestrator | 2025-04-01 20:03:27 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:30.646133 | orchestrator | 2025-04-01 20:03:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:30.646283 | orchestrator | 2025-04-01 20:03:30 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:30.651756 | orchestrator | 2025-04-01 20:03:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:30.652514 | orchestrator | 2025-04-01 20:03:30 | INFO  | Task 82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state STARTED 2025-04-01 20:03:30.653379 | orchestrator | 2025-04-01 20:03:30 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:30.654387 | orchestrator | 2025-04-01 20:03:30 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:33.696735 | orchestrator | 2025-04-01 20:03:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:33.696949 | orchestrator | 2025-04-01 20:03:33 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:33.698191 | orchestrator | 2025-04-01 20:03:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:33.699754 | orchestrator | 2025-04-01 20:03:33 | INFO  | Task 
82cb2d45-0a00-472b-b83d-93043e4c0aa4 is in state SUCCESS 2025-04-01 20:03:33.701372 | orchestrator | 2025-04-01 20:03:33.701412 | orchestrator | 2025-04-01 20:03:33.701428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:03:33.701443 | orchestrator | 2025-04-01 20:03:33.701458 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:03:33.701472 | orchestrator | Tuesday 01 April 2025 19:59:49 +0000 (0:00:00.649) 0:00:00.649 ********* 2025-04-01 20:03:33.701486 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:03:33.701576 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:03:33.701592 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:03:33.701606 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:03:33.701620 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:03:33.701634 | orchestrator | ok: [testbed-node-5] 2025-04-01 20:03:33.701648 | orchestrator | 2025-04-01 20:03:33.701663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:03:33.701677 | orchestrator | Tuesday 01 April 2025 19:59:51 +0000 (0:00:01.781) 0:00:02.431 ********* 2025-04-01 20:03:33.701691 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-04-01 20:03:33.701706 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-04-01 20:03:33.701720 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-04-01 20:03:33.701734 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-04-01 20:03:33.701748 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-04-01 20:03:33.701789 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-04-01 20:03:33.701804 | orchestrator | 2025-04-01 20:03:33.701819 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-04-01 20:03:33.701833 | orchestrator | 2025-04-01 20:03:33.701848 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-01 20:03:33.703012 | orchestrator | Tuesday 01 April 2025 19:59:52 +0000 (0:00:01.326) 0:00:03.757 ********* 2025-04-01 20:03:33.703027 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:03:33.703044 | orchestrator | 2025-04-01 20:03:33.703059 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-04-01 20:03:33.703073 | orchestrator | Tuesday 01 April 2025 19:59:54 +0000 (0:00:01.566) 0:00:05.324 ********* 2025-04-01 20:03:33.703088 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-04-01 20:03:33.703102 | orchestrator | 2025-04-01 20:03:33.703116 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-04-01 20:03:33.703130 | orchestrator | Tuesday 01 April 2025 19:59:57 +0000 (0:00:03.820) 0:00:09.145 ********* 2025-04-01 20:03:33.703145 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-04-01 20:03:33.703160 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-04-01 20:03:33.703203 | orchestrator | 2025-04-01 20:03:33.703218 | orchestrator | TASK [service-ks-register : cinder | 
Creating projects] ************************ 2025-04-01 20:03:33.703232 | orchestrator | Tuesday 01 April 2025 20:00:05 +0000 (0:00:07.621) 0:00:16.766 ********* 2025-04-01 20:03:33.703246 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 20:03:33.703260 | orchestrator | 2025-04-01 20:03:33.703274 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-04-01 20:03:33.703302 | orchestrator | Tuesday 01 April 2025 20:00:09 +0000 (0:00:03.881) 0:00:20.647 ********* 2025-04-01 20:03:33.703317 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 20:03:33.703331 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-04-01 20:03:33.703345 | orchestrator | 2025-04-01 20:03:33.703359 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-04-01 20:03:33.703373 | orchestrator | Tuesday 01 April 2025 20:00:14 +0000 (0:00:04.843) 0:00:25.491 ********* 2025-04-01 20:03:33.703387 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 20:03:33.703401 | orchestrator | 2025-04-01 20:03:33.703415 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-04-01 20:03:33.703429 | orchestrator | Tuesday 01 April 2025 20:00:17 +0000 (0:00:03.006) 0:00:28.498 ********* 2025-04-01 20:03:33.703443 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-04-01 20:03:33.703457 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-04-01 20:03:33.703471 | orchestrator | 2025-04-01 20:03:33.703485 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-04-01 20:03:33.703499 | orchestrator | Tuesday 01 April 2025 20:00:25 +0000 (0:00:07.771) 0:00:36.269 ********* 2025-04-01 20:03:33.703564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.703586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.703643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.703671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.703687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.703703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.703756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.703811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.703837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.703854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.703871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.703923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.703954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.703979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.703995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.704010 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.704060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.704090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.704114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.704129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.704145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.704159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.704217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.704236 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.704264 | 
orchestrator | 2025-04-01 20:03:33.704280 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-01 20:03:33.704294 | orchestrator | Tuesday 01 April 2025 20:00:27 +0000 (0:00:02.797) 0:00:39.066 ********* 2025-04-01 20:03:33.704308 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.704323 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.704338 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.704353 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:03:33.704367 | orchestrator | 2025-04-01 20:03:33.704453 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-04-01 20:03:33.704475 | orchestrator | Tuesday 01 April 2025 20:00:30 +0000 (0:00:02.168) 0:00:41.234 ********* 2025-04-01 20:03:33.704489 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-04-01 20:03:33.704503 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-04-01 20:03:33.704517 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-04-01 20:03:33.704531 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-04-01 20:03:33.704545 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-04-01 20:03:33.704559 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-04-01 20:03:33.704573 | orchestrator | 2025-04-01 20:03:33.704587 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-04-01 20:03:33.704601 | orchestrator | Tuesday 01 April 2025 20:00:34 +0000 (0:00:04.283) 0:00:45.518 ********* 2025-04-01 20:03:33.704616 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704634 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704683 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704709 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704724 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704753 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-01 20:03:33.704832 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.704900 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.704919 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.704934 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.704950 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.704994 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-01 20:03:33.705018 | orchestrator | 2025-04-01 20:03:33.705033 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-04-01 20:03:33.705047 | orchestrator | Tuesday 01 April 2025 20:00:40 +0000 (0:00:05.987) 0:00:51.506 ********* 2025-04-01 20:03:33.705061 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:33.705076 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:33.705090 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:33.705106 | orchestrator | 2025-04-01 20:03:33.705121 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-04-01 20:03:33.705135 | orchestrator | Tuesday 01 April 2025 20:00:44 +0000 (0:00:04.585) 0:00:56.091 ********* 2025-04-01 20:03:33.705149 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-04-01 20:03:33.705163 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-04-01 20:03:33.705176 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-04-01 20:03:33.705190 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-04-01 20:03:33.705204 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-04-01 20:03:33.705218 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-04-01 20:03:33.705232 | orchestrator | 2025-04-01 20:03:33.705245 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-04-01 20:03:33.705259 | orchestrator | Tuesday 01 April 2025 20:00:52 +0000 (0:00:07.563) 0:01:03.654 ********* 2025-04-01 20:03:33.705272 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-04-01 20:03:33.705286 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 
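Before ownership of the config directories is normalized, the external Ceph wiring copies the cluster keyrings next to the per-service ceph.conf: cinder-volume receives the client.cinder keyring for the rbd-1 backend, while cinder-backup receives both the client.cinder and client.cinder-backup keyrings. A sketch of the backup-side copy under the usual kolla conventions; the module choice and the src/dest paths are assumptions, only the task name and loop items come from the output above:

  - name: Copy over Ceph keyring files for cinder-backup
    ansible.builtin.copy:                                                 # module choice is illustrative
      src: "{{ node_custom_config }}/cinder/cinder-backup/{{ item }}"     # assumed source layout
      dest: "{{ node_config_directory }}/cinder-backup/{{ item }}"        # assumed target layout
      mode: "0660"
    loop:
      - ceph.client.cinder.keyring
      - ceph.client.cinder-backup.keyring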
2025-04-01 20:03:33.705301 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-04-01 20:03:33.705314 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-04-01 20:03:33.705328 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-04-01 20:03:33.705342 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-04-01 20:03:33.705357 | orchestrator | 2025-04-01 20:03:33.705372 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-04-01 20:03:33.705386 | orchestrator | Tuesday 01 April 2025 20:00:54 +0000 (0:00:02.441) 0:01:06.096 ********* 2025-04-01 20:03:33.705400 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.705414 | orchestrator | 2025-04-01 20:03:33.705428 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-04-01 20:03:33.705442 | orchestrator | Tuesday 01 April 2025 20:00:55 +0000 (0:00:00.429) 0:01:06.526 ********* 2025-04-01 20:03:33.705456 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.705470 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.705483 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.705495 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.705508 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.705520 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:03:33.705532 | orchestrator | 2025-04-01 20:03:33.705545 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-01 20:03:33.705557 | orchestrator | Tuesday 01 April 2025 20:00:57 +0000 (0:00:02.569) 0:01:09.095 ********* 2025-04-01 20:03:33.705572 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:03:33.705586 | orchestrator | 2025-04-01 20:03:33.705599 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-04-01 20:03:33.705618 | orchestrator | Tuesday 01 April 2025 20:01:00 +0000 (0:00:02.208) 0:01:11.304 ********* 2025-04-01 20:03:33.705631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.705688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.705704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.705718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.705742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.705779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.705824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.706233 | orchestrator | 2025-04-01 20:03:33.706246 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-04-01 20:03:33.706259 | orchestrator | Tuesday 01 April 2025 20:01:04 +0000 (0:00:04.865) 0:01:16.169 ********* 2025-04-01 20:03:33.706313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706377 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.706389 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.706403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706461 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.706474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706511 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.706524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706550 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:03:33.706592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706620 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.706633 | orchestrator | 2025-04-01 20:03:33.706646 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-04-01 20:03:33.706659 | orchestrator | Tuesday 01 April 2025 20:01:08 +0000 (0:00:03.425) 0:01:19.595 ********* 2025-04-01 20:03:33.706671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706809 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.706825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.706839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706859 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.706872 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.706885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706911 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.706955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.706981 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.706992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-04-01 20:03:33.707008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707019 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:03:33.707029 | orchestrator | 2025-04-01 20:03:33.707040 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-04-01 20:03:33.707050 | orchestrator | Tuesday 01 April 2025 20:01:13 +0000 (0:00:05.419) 0:01:25.014 ********* 2025-04-01 20:03:33.707060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707189 | 
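Every loop item printed by these tasks is one entry of the role's service map, the same dictionary that later drives the container deployment. Rewritten as YAML, the cinder-volume entry shown above is essentially the following (values verbatim from this run; the empty-string volume entries, which are disabled optional mounts, and the empty tmpfs entry are omitted):

  cinder-volume:
    container_name: cinder_volume
    group: cinder-volume
    enabled: true
    image: registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206
    privileged: true
    ipc_mode: host
    volumes:
      - /etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /dev/:/dev/
      - /lib/modules:/lib/modules:ro
      - /run:/run:shared
      - cinder:/var/lib/cinder
      - iscsi_info:/etc/iscsi
      - kolla_logs:/var/log/kolla/
      - /opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone
    dimensions: {}
    healthcheck:
      interval: '30'
      retries: '3'
      start_period: '5'
      test: ['CMD-SHELL', 'healthcheck_port cinder-volume 5672']
      timeout: '30'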
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707323 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707452 | orchestrator | 2025-04-01 20:03:33.707462 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-04-01 20:03:33.707472 | orchestrator | Tuesday 01 April 2025 20:01:18 +0000 (0:00:04.787) 0:01:29.802 ********* 2025-04-01 20:03:33.707482 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-01 20:03:33.707493 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.707503 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-01 20:03:33.707513 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.707524 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-01 20:03:33.707534 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:03:33.707544 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-01 20:03:33.707554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-01 20:03:33.707564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-01 20:03:33.707574 | orchestrator | 2025-04-01 20:03:33.707585 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-04-01 20:03:33.707595 | orchestrator | Tuesday 01 April 2025 20:01:24 +0000 (0:00:05.505) 0:01:35.307 ********* 2025-04-01 20:03:33.707605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.707669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.707723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.707867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.707877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.707888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.707899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.707914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.707925 | orchestrator |
2025-04-01 20:03:33.707939 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-04-01 20:03:33.707950 | orchestrator | Tuesday 01 April 2025 20:01:44 +0000 (0:00:20.722) 0:01:56.030 *********
2025-04-01 20:03:33.707961 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:03:33.707971 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:03:33.707986 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:03:33.707997 | orchestrator | changed: [testbed-node-3]
2025-04-01 20:03:33.708007 | orchestrator | changed: [testbed-node-4]
2025-04-01 20:03:33.708017 | orchestrator | changed: [testbed-node-5]
2025-04-01 20:03:33.708027 | orchestrator |
2025-04-01 20:03:33.708037 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-04-01 20:03:33.708047 | orchestrator | Tuesday 01 April 2025 20:01:53 +0000 (0:00:08.517) 0:02:04.548 *********
2025-04-01 20:03:33.708057 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708159 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.708169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708216 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.708226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708242 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.708252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708290 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.708301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708350 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.708365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.708387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.708403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-04-01 20:03:33.708414 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:03:33.708424 | orchestrator |
2025-04-01 20:03:33.708434 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-04-01 20:03:33.708445 | orchestrator | Tuesday 01 April 2025 20:01:56 +0000 (0:00:03.640) 0:02:08.189 *********
2025-04-01 20:03:33.708455 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:03:33.708465 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:03:33.708475 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:03:33.708485 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:03:33.708495 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:03:33.708505 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:03:33.708515 | orchestrator |
2025-04-01 20:03:33.708525 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-04-01 20:03:33.708536 | orchestrator | Tuesday 01 April 2025 20:01:58 +0000 (0:00:01.564) 0:02:09.753 *********
2025-04-01 20:03:33.708550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.708598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.708613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-01 20:03:33.708634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-01 20:03:33.708650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-01 20:03:33.708833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708844 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-01 20:03:33.708870 | orchestrator | 2025-04-01 20:03:33.708880 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-01 20:03:33.708890 | orchestrator | Tuesday 01 April 2025 20:02:03 +0000 (0:00:04.764) 0:02:14.517 ********* 2025-04-01 20:03:33.708901 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:33.708911 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:33.708921 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:33.708931 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:03:33.708941 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:03:33.708951 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:03:33.708962 | orchestrator | 2025-04-01 20:03:33.708972 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-04-01 20:03:33.708982 | orchestrator | Tuesday 01 April 2025 20:02:05 +0000 (0:00:01.706) 0:02:16.224 ********* 2025-04-01 20:03:33.708992 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:33.709002 | orchestrator | 2025-04-01 20:03:33.709012 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-04-01 20:03:33.709022 | orchestrator | Tuesday 01 April 2025 20:02:07 +0000 (0:00:02.782) 0:02:19.006 ********* 2025-04-01 20:03:33.709032 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:33.709042 | orchestrator | 2025-04-01 20:03:33.709052 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-04-01 20:03:33.709062 | orchestrator | Tuesday 01 April 2025 20:02:10 +0000 (0:00:02.675) 0:02:21.681 ********* 2025-04-01 20:03:33.709073 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:33.709083 | orchestrator | 2025-04-01 20:03:33.709093 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-01 20:03:33.709103 | orchestrator | Tuesday 01 April 2025 20:02:31 +0000 (0:00:20.776) 0:02:42.457 ********* 2025-04-01 20:03:33.709113 | orchestrator | 2025-04-01 20:03:33.709123 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-01 20:03:33.709134 | orchestrator | Tuesday 01 April 2025 20:02:31 +0000 (0:00:00.142) 0:02:42.600 ********* 2025-04-01 20:03:33.709144 | orchestrator | 2025-04-01 20:03:33.709154 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2025-04-01 20:03:33.709164 | orchestrator | Tuesday 01 April 2025 20:02:31 +0000 (0:00:00.416) 0:02:43.016 ********* 2025-04-01 20:03:33.709174 | orchestrator | 2025-04-01 20:03:33.709184 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-01 20:03:33.709194 | orchestrator | Tuesday 01 April 2025 20:02:31 +0000 (0:00:00.069) 0:02:43.086 ********* 2025-04-01 20:03:33.709204 | orchestrator | 2025-04-01 20:03:33.709215 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-01 20:03:33.709225 | orchestrator | Tuesday 01 April 2025 20:02:31 +0000 (0:00:00.065) 0:02:43.151 ********* 2025-04-01 20:03:33.709235 | orchestrator | 2025-04-01 20:03:33.709245 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-01 20:03:33.709255 | orchestrator | Tuesday 01 April 2025 20:02:32 +0000 (0:00:00.064) 0:02:43.215 ********* 2025-04-01 20:03:33.709265 | orchestrator | 2025-04-01 20:03:33.709275 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-04-01 20:03:33.709285 | orchestrator | Tuesday 01 April 2025 20:02:32 +0000 (0:00:00.292) 0:02:43.508 ********* 2025-04-01 20:03:33.709295 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:33.709305 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:03:33.709321 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:03:33.709331 | orchestrator | 2025-04-01 20:03:33.709342 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-04-01 20:03:33.709352 | orchestrator | Tuesday 01 April 2025 20:02:52 +0000 (0:00:20.074) 0:03:03.582 ********* 2025-04-01 20:03:33.709362 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:33.709372 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:03:33.709382 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:03:33.709392 | orchestrator | 2025-04-01 20:03:33.709402 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-04-01 20:03:33.709416 | orchestrator | Tuesday 01 April 2025 20:02:58 +0000 (0:00:06.032) 0:03:09.615 ********* 2025-04-01 20:03:36.745558 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:03:36.745678 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:03:36.745698 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:03:36.745714 | orchestrator | 2025-04-01 20:03:36.745731 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-04-01 20:03:36.745747 | orchestrator | Tuesday 01 April 2025 20:03:18 +0000 (0:00:19.969) 0:03:29.584 ********* 2025-04-01 20:03:36.745796 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:03:36.745812 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:03:36.745827 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:03:36.745841 | orchestrator | 2025-04-01 20:03:36.745855 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-04-01 20:03:36.745870 | orchestrator | Tuesday 01 April 2025 20:03:31 +0000 (0:00:13.124) 0:03:42.709 ********* 2025-04-01 20:03:36.745885 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:36.745899 | orchestrator | 2025-04-01 20:03:36.745913 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 
20:03:36.745929 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-01 20:03:36.745945 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-01 20:03:36.745959 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-01 20:03:36.745973 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:03:36.745987 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:03:36.746001 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:03:36.746015 | orchestrator | 2025-04-01 20:03:36.746082 | orchestrator | 2025-04-01 20:03:36.746099 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:03:36.746116 | orchestrator | Tuesday 01 April 2025 20:03:32 +0000 (0:00:00.819) 0:03:43.528 ********* 2025-04-01 20:03:36.746132 | orchestrator | =============================================================================== 2025-04-01 20:03:36.746149 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.78s 2025-04-01 20:03:36.746165 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 20.72s 2025-04-01 20:03:36.746181 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.07s 2025-04-01 20:03:36.746198 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.97s 2025-04-01 20:03:36.746213 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.12s 2025-04-01 20:03:36.746230 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 8.52s 2025-04-01 20:03:36.746246 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.77s 2025-04-01 20:03:36.746290 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.62s 2025-04-01 20:03:36.746308 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 7.56s 2025-04-01 20:03:36.746324 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.03s 2025-04-01 20:03:36.746340 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.99s 2025-04-01 20:03:36.746356 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 5.51s 2025-04-01 20:03:36.746373 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 5.42s 2025-04-01 20:03:36.746402 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.87s 2025-04-01 20:03:36.746419 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.84s 2025-04-01 20:03:36.746436 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.79s 2025-04-01 20:03:36.746451 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.76s 2025-04-01 20:03:36.746466 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 4.59s 2025-04-01 20:03:36.746481 | orchestrator | cinder : Ensuring cinder service 
ceph config subdirs exists ------------- 4.28s 2025-04-01 20:03:36.746496 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.88s 2025-04-01 20:03:36.746511 | orchestrator | 2025-04-01 20:03:33 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:36.746527 | orchestrator | 2025-04-01 20:03:33 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:36.746543 | orchestrator | 2025-04-01 20:03:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:36.746576 | orchestrator | 2025-04-01 20:03:36 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:36.750601 | orchestrator | 2025-04-01 20:03:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:36.753881 | orchestrator | 2025-04-01 20:03:36 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:36.754663 | orchestrator | 2025-04-01 20:03:36 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:36.755579 | orchestrator | 2025-04-01 20:03:36 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:39.797173 | orchestrator | 2025-04-01 20:03:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:39.797312 | orchestrator | 2025-04-01 20:03:39 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:39.798186 | orchestrator | 2025-04-01 20:03:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:39.798224 | orchestrator | 2025-04-01 20:03:39 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:39.801287 | orchestrator | 2025-04-01 20:03:39 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:39.801639 | orchestrator | 2025-04-01 20:03:39 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:39.801805 | orchestrator | 2025-04-01 20:03:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:42.871904 | orchestrator | 2025-04-01 20:03:42 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:42.874061 | orchestrator | 2025-04-01 20:03:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:42.876586 | orchestrator | 2025-04-01 20:03:42 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:42.878669 | orchestrator | 2025-04-01 20:03:42 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:42.879681 | orchestrator | 2025-04-01 20:03:42 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:42.880669 | orchestrator | 2025-04-01 20:03:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:45.919928 | orchestrator | 2025-04-01 20:03:45 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:45.922609 | orchestrator | 2025-04-01 20:03:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:45.925912 | orchestrator | 2025-04-01 20:03:45 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:45.928137 | orchestrator | 2025-04-01 20:03:45 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state STARTED 2025-04-01 20:03:48.973478 | orchestrator | 2025-04-01 20:03:45 | INFO  | Task 
38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:48.973598 | orchestrator | 2025-04-01 20:03:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:48.973634 | orchestrator | 2025-04-01 20:03:48.973650 | orchestrator | 2025-04-01 20:03:48.973665 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:03:48.973680 | orchestrator | 2025-04-01 20:03:48.973694 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:03:48.973709 | orchestrator | Tuesday 01 April 2025 19:59:23 +0000 (0:00:00.361) 0:00:00.361 ********* 2025-04-01 20:03:48.973723 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:03:48.973739 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:03:48.973883 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:03:48.973899 | orchestrator | 2025-04-01 20:03:48.973914 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:03:48.973928 | orchestrator | Tuesday 01 April 2025 19:59:23 +0000 (0:00:00.548) 0:00:00.909 ********* 2025-04-01 20:03:48.973943 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-04-01 20:03:48.973957 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-04-01 20:03:48.973972 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-04-01 20:03:48.973986 | orchestrator | 2025-04-01 20:03:48.974000 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-04-01 20:03:48.974431 | orchestrator | 2025-04-01 20:03:48.974463 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-01 20:03:48.974478 | orchestrator | Tuesday 01 April 2025 19:59:24 +0000 (0:00:00.400) 0:00:01.310 ********* 2025-04-01 20:03:48.974492 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:03:48.974507 | orchestrator | 2025-04-01 20:03:48.974522 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-04-01 20:03:48.974536 | orchestrator | Tuesday 01 April 2025 19:59:25 +0000 (0:00:01.074) 0:00:02.385 ********* 2025-04-01 20:03:48.974551 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-04-01 20:03:48.974565 | orchestrator | 2025-04-01 20:03:48.974579 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-04-01 20:03:48.974593 | orchestrator | Tuesday 01 April 2025 19:59:29 +0000 (0:00:03.995) 0:00:06.380 ********* 2025-04-01 20:03:48.974607 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-04-01 20:03:48.974621 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-04-01 20:03:48.974635 | orchestrator | 2025-04-01 20:03:48.974650 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-04-01 20:03:48.974664 | orchestrator | Tuesday 01 April 2025 19:59:36 +0000 (0:00:07.131) 0:00:13.512 ********* 2025-04-01 20:03:48.974678 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 20:03:48.974721 | orchestrator | 2025-04-01 20:03:48.974736 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-04-01 20:03:48.974751 | orchestrator | 
Tuesday 01 April 2025 19:59:40 +0000 (0:00:04.322) 0:00:17.835 ********* 2025-04-01 20:03:48.974789 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 20:03:48.974804 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-04-01 20:03:48.974819 | orchestrator | 2025-04-01 20:03:48.974833 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-04-01 20:03:48.974847 | orchestrator | Tuesday 01 April 2025 19:59:45 +0000 (0:00:04.874) 0:00:22.709 ********* 2025-04-01 20:03:48.974861 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 20:03:48.974876 | orchestrator | 2025-04-01 20:03:48.974890 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-04-01 20:03:48.974904 | orchestrator | Tuesday 01 April 2025 19:59:49 +0000 (0:00:03.911) 0:00:26.621 ********* 2025-04-01 20:03:48.974918 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-04-01 20:03:48.974932 | orchestrator | 2025-04-01 20:03:48.974946 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-04-01 20:03:48.974960 | orchestrator | Tuesday 01 April 2025 19:59:54 +0000 (0:00:04.750) 0:00:31.371 ********* 2025-04-01 20:03:48.974992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.975040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.975096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.975147 | orchestrator | 2025-04-01 20:03:48.975164 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-01 20:03:48.975180 | orchestrator | Tuesday 01 April 2025 19:59:59 +0000 (0:00:05.499) 0:00:36.871 ********* 2025-04-01 20:03:48.975195 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:03:48.975211 | orchestrator | 2025-04-01 20:03:48.975227 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-04-01 20:03:48.975257 | orchestrator | Tuesday 01 April 2025 20:00:02 +0000 (0:00:02.145) 0:00:39.017 ********* 2025-04-01 20:03:48.975273 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.975290 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:03:48.975306 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:03:48.975321 | orchestrator | 2025-04-01 20:03:48.975337 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-04-01 20:03:48.975352 | orchestrator | Tuesday 01 April 2025 20:00:15 +0000 (0:00:13.552) 0:00:52.569 ********* 2025-04-01 20:03:48.975368 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975413 | orchestrator | 2025-04-01 20:03:48.975427 | orchestrator | TASK [glance : Copy over ceph Glance 
keyrings] ********************************* 2025-04-01 20:03:48.975441 | orchestrator | Tuesday 01 April 2025 20:00:17 +0000 (0:00:01.969) 0:00:54.539 ********* 2025-04-01 20:03:48.975455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975469 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-01 20:03:48.975498 | orchestrator | 2025-04-01 20:03:48.975512 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-04-01 20:03:48.975526 | orchestrator | Tuesday 01 April 2025 20:00:18 +0000 (0:00:01.324) 0:00:55.864 ********* 2025-04-01 20:03:48.975540 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:03:48.975559 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:03:48.975574 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:03:48.975589 | orchestrator | 2025-04-01 20:03:48.975603 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-04-01 20:03:48.975617 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.693) 0:00:56.558 ********* 2025-04-01 20:03:48.975631 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.975646 | orchestrator | 2025-04-01 20:03:48.975660 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-04-01 20:03:48.975674 | orchestrator | Tuesday 01 April 2025 20:00:19 +0000 (0:00:00.311) 0:00:56.870 ********* 2025-04-01 20:03:48.975688 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.975702 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.975716 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.975736 | orchestrator | 2025-04-01 20:03:48.975750 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-01 20:03:48.975784 | orchestrator | Tuesday 01 April 2025 20:00:20 +0000 (0:00:00.365) 0:00:57.235 ********* 2025-04-01 20:03:48.975800 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:03:48.975814 | orchestrator | 2025-04-01 20:03:48.975828 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-04-01 20:03:48.975850 | orchestrator | Tuesday 01 April 2025 20:00:21 +0000 (0:00:00.912) 0:00:58.148 ********* 2025-04-01 20:03:48.975873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.975938 | orchestrator | 2025-04-01 20:03:48.975953 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-04-01 20:03:48.975967 | orchestrator | Tuesday 01 April 2025 20:00:26 +0000 (0:00:05.560) 0:01:03.709 ********* 2025-04-01 20:03:48.975982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.975997 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.976042 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.976073 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976087 | orchestrator | 2025-04-01 20:03:48.976101 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-04-01 20:03:48.976115 | orchestrator | Tuesday 01 April 2025 20:00:33 +0000 (0:00:06.638) 0:01:10.347 ********* 2025-04-01 20:03:48.976138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.976160 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.976191 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-01 20:03:48.976233 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976248 | orchestrator | 2025-04-01 20:03:48.976262 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-04-01 20:03:48.976276 | orchestrator | Tuesday 01 April 2025 20:00:41 +0000 (0:00:08.080) 0:01:18.428 ********* 2025-04-01 20:03:48.976290 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976305 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976319 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976333 | orchestrator | 2025-04-01 20:03:48.976353 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-04-01 20:03:48.976368 | orchestrator | Tuesday 01 April 2025 20:00:55 +0000 (0:00:13.597) 0:01:32.025 ********* 2025-04-01 20:03:48.976382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.976399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.976430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.976447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.976484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.976501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.976516 | orchestrator | 2025-04-01 20:03:48.976531 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-04-01 20:03:48.976545 | orchestrator | Tuesday 01 April 2025 20:01:03 +0000 (0:00:08.556) 0:01:40.582 ********* 2025-04-01 20:03:48.976566 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:03:48.976580 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:03:48.976594 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.976608 | orchestrator | 2025-04-01 20:03:48.976623 | orchestrator | TASK [glance : Copying over glance-cache.conf for 
glance_api] ****************** 2025-04-01 20:03:48.976637 | orchestrator | Tuesday 01 April 2025 20:01:23 +0000 (0:00:20.028) 0:02:00.610 ********* 2025-04-01 20:03:48.976651 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976665 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976679 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976693 | orchestrator | 2025-04-01 20:03:48.976707 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-04-01 20:03:48.976721 | orchestrator | Tuesday 01 April 2025 20:01:48 +0000 (0:00:25.231) 0:02:25.841 ********* 2025-04-01 20:03:48.976735 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976749 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976815 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976832 | orchestrator | 2025-04-01 20:03:48.976846 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-04-01 20:03:48.976861 | orchestrator | Tuesday 01 April 2025 20:02:04 +0000 (0:00:15.569) 0:02:41.411 ********* 2025-04-01 20:03:48.976875 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976889 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.976903 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.976917 | orchestrator | 2025-04-01 20:03:48.976932 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-04-01 20:03:48.976952 | orchestrator | Tuesday 01 April 2025 20:02:14 +0000 (0:00:09.663) 0:02:51.074 ********* 2025-04-01 20:03:48.976966 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.976987 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.977002 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.977016 | orchestrator | 2025-04-01 20:03:48.977030 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-04-01 20:03:48.977044 | orchestrator | Tuesday 01 April 2025 20:02:23 +0000 (0:00:09.699) 0:03:00.774 ********* 2025-04-01 20:03:48.977058 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.977072 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.977086 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.977100 | orchestrator | 2025-04-01 20:03:48.977114 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-04-01 20:03:48.977126 | orchestrator | Tuesday 01 April 2025 20:02:24 +0000 (0:00:00.480) 0:03:01.255 ********* 2025-04-01 20:03:48.977139 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-01 20:03:48.977152 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.977164 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-01 20:03:48.977177 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.977190 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-01 20:03:48.977202 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.977215 | orchestrator | 2025-04-01 20:03:48.977227 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-04-01 20:03:48.977240 | orchestrator | Tuesday 01 April 2025 20:02:28 +0000 (0:00:04.006) 0:03:05.261 
********* 2025-04-01 20:03:48.977253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.977281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.977296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.977316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-01 20:03:48.977339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.977360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-01 20:03:48.977373 | orchestrator | 2025-04-01 20:03:48.977386 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-01 20:03:48.977399 | orchestrator | Tuesday 01 April 2025 20:02:34 +0000 (0:00:05.954) 0:03:11.215 ********* 2025-04-01 20:03:48.977411 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:03:48.977424 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:03:48.977437 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:03:48.977449 | orchestrator | 2025-04-01 20:03:48.977466 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-04-01 20:03:48.977479 | orchestrator | Tuesday 01 April 2025 20:02:35 +0000 (0:00:01.309) 0:03:12.524 ********* 2025-04-01 20:03:48.977492 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977504 | orchestrator | 2025-04-01 20:03:48.977517 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-04-01 20:03:48.977530 | orchestrator | Tuesday 01 April 2025 20:02:37 +0000 (0:00:02.404) 0:03:14.929 ********* 2025-04-01 20:03:48.977542 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977555 | orchestrator | 2025-04-01 20:03:48.977567 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-04-01 20:03:48.977580 | orchestrator | Tuesday 01 April 2025 20:02:40 +0000 (0:00:02.537) 0:03:17.466 ********* 2025-04-01 20:03:48.977593 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977605 | orchestrator | 2025-04-01 20:03:48.977618 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-04-01 20:03:48.977630 | orchestrator | Tuesday 01 April 2025 20:02:42 +0000 (0:00:02.358) 0:03:19.824 ********* 2025-04-01 20:03:48.977643 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977661 | orchestrator | 2025-04-01 20:03:48.977674 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-04-01 20:03:48.977686 | orchestrator | Tuesday 01 April 2025 20:03:08 +0000 (0:00:25.534) 0:03:45.359 ********* 2025-04-01 20:03:48.977699 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977711 | orchestrator | 2025-04-01 20:03:48.977724 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-01 20:03:48.977736 | orchestrator | Tuesday 01 April 2025 20:03:10 +0000 (0:00:01.875) 0:03:47.234 ********* 2025-04-01 20:03:48.977749 | orchestrator | 2025-04-01 20:03:48.977775 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-01 20:03:48.977788 | orchestrator | Tuesday 01 April 2025 20:03:10 +0000 (0:00:00.058) 0:03:47.292 ********* 2025-04-01 20:03:48.977801 | orchestrator | 2025-04-01 20:03:48.977814 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-01 20:03:48.977827 | orchestrator | Tuesday 01 April 2025 20:03:10 +0000 (0:00:00.059) 0:03:47.351 
********* 2025-04-01 20:03:48.977839 | orchestrator | 2025-04-01 20:03:48.977851 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-04-01 20:03:48.977864 | orchestrator | Tuesday 01 April 2025 20:03:10 +0000 (0:00:00.212) 0:03:47.564 ********* 2025-04-01 20:03:48.977876 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:03:48.977889 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:03:48.977901 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:03:48.977914 | orchestrator | 2025-04-01 20:03:48.977926 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:03:48.977940 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-04-01 20:03:48.977954 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-01 20:03:48.977967 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-01 20:03:48.977979 | orchestrator | 2025-04-01 20:03:48.977992 | orchestrator | 2025-04-01 20:03:48.978005 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:03:48.978044 | orchestrator | Tuesday 01 April 2025 20:03:47 +0000 (0:00:36.600) 0:04:24.165 ********* 2025-04-01 20:03:48.978059 | orchestrator | =============================================================================== 2025-04-01 20:03:48.978072 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.60s 2025-04-01 20:03:48.978090 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.53s 2025-04-01 20:03:48.978103 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 25.23s 2025-04-01 20:03:48.978116 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 20.03s 2025-04-01 20:03:48.978128 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 15.57s 2025-04-01 20:03:48.978141 | orchestrator | glance : Creating TLS backend PEM File --------------------------------- 13.60s 2025-04-01 20:03:48.978153 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 13.55s 2025-04-01 20:03:48.978166 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 9.70s 2025-04-01 20:03:48.978178 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 9.66s 2025-04-01 20:03:48.978191 | orchestrator | glance : Copying over config.json files for services -------------------- 8.56s 2025-04-01 20:03:48.978203 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 8.08s 2025-04-01 20:03:48.978216 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.13s 2025-04-01 20:03:48.978228 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.64s 2025-04-01 20:03:48.978241 | orchestrator | glance : Check glance containers ---------------------------------------- 5.95s 2025-04-01 20:03:48.978260 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.56s 2025-04-01 20:03:48.978272 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.50s 2025-04-01 20:03:48.978285 | 
orchestrator | service-ks-register : glance | Creating users --------------------------- 4.87s 2025-04-01 20:03:48.978297 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.75s 2025-04-01 20:03:48.978310 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.32s 2025-04-01 20:03:48.978328 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.01s 2025-04-01 20:03:52.034713 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:52.035002 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:03:52.035035 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:52.035051 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:52.035066 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task 70fa12a7-5b3b-4401-b401-3b53d8bed74d is in state SUCCESS 2025-04-01 20:03:52.035081 | orchestrator | 2025-04-01 20:03:48 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:52.035097 | orchestrator | 2025-04-01 20:03:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:52.035131 | orchestrator | 2025-04-01 20:03:52 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:55.089285 | orchestrator | 2025-04-01 20:03:52 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:03:55.089389 | orchestrator | 2025-04-01 20:03:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:55.089407 | orchestrator | 2025-04-01 20:03:52 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:55.089421 | orchestrator | 2025-04-01 20:03:52 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:55.089437 | orchestrator | 2025-04-01 20:03:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:55.089468 | orchestrator | 2025-04-01 20:03:55 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:55.089898 | orchestrator | 2025-04-01 20:03:55 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:03:55.091042 | orchestrator | 2025-04-01 20:03:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:55.092476 | orchestrator | 2025-04-01 20:03:55 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:03:55.094935 | orchestrator | 2025-04-01 20:03:55 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:03:58.154065 | orchestrator | 2025-04-01 20:03:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:03:58.154247 | orchestrator | 2025-04-01 20:03:58 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:03:58.155287 | orchestrator | 2025-04-01 20:03:58 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:03:58.155440 | orchestrator | 2025-04-01 20:03:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:03:58.161921 | orchestrator | 2025-04-01 20:03:58 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:01.200443 | 
orchestrator | 2025-04-01 20:03:58 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:01.200596 | orchestrator | 2025-04-01 20:03:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:01.200628 | orchestrator | 2025-04-01 20:04:01 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:01.203724 | orchestrator | 2025-04-01 20:04:01 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:01.204045 | orchestrator | 2025-04-01 20:04:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:01.204068 | orchestrator | 2025-04-01 20:04:01 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:01.204811 | orchestrator | 2025-04-01 20:04:01 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:04.257855 | orchestrator | 2025-04-01 20:04:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:04.257994 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:04.259217 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:04.263117 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:04.264754 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:04.267012 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:04.268809 | orchestrator | 2025-04-01 20:04:04 | INFO  | Task 1f279974-ffc1-4e01-90a7-e228c13a1c05 is in state STARTED 2025-04-01 20:04:04.269275 | orchestrator | 2025-04-01 20:04:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:07.328203 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:07.330158 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:07.333643 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:07.335703 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:07.336787 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:07.337927 | orchestrator | 2025-04-01 20:04:07 | INFO  | Task 1f279974-ffc1-4e01-90a7-e228c13a1c05 is in state STARTED 2025-04-01 20:04:10.393377 | orchestrator | 2025-04-01 20:04:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:10.393552 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:10.394332 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:10.396292 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:10.399306 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:10.401909 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 
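The glance_api items that scrolled past above each carry a fixed haproxy custom_member_list ('server testbed-node-N 192.168.16.1N:9292 check inter 2000 rise 2 fall 5') plus 'timeout client/server 6h' extras. As an aside, here is a minimal sketch of how such a service dict could be flattened into an HAProxy backend block; the dict shape is copied from the logged item, but render_backend is an illustrative helper and not the actual kolla-ansible haproxy-config template.

# Illustrative only: approximates how a kolla-style haproxy service entry
# (as printed in the log above) maps onto an HAProxy backend section.
service = {
    "glance_api": {
        "mode": "http",
        "port": "9292",
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
            "",  # the logged list ends with an empty string; it renders to nothing
        ],
    },
}

def render_backend(name: str, cfg: dict) -> str:
    """Build an haproxy 'backend' block from one service entry (sketch)."""
    lines = [f"backend {name}_back", f"    mode {cfg['mode']}"]
    lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
    lines += [f"    {member}" for member in cfg["custom_member_list"] if member]
    return "\n".join(lines)

print(render_backend("glance_api", service["glance_api"]))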
2025-04-01 20:04:10.402880 | orchestrator | 2025-04-01 20:04:10 | INFO  | Task 1f279974-ffc1-4e01-90a7-e228c13a1c05 is in state STARTED 2025-04-01 20:04:10.403095 | orchestrator | 2025-04-01 20:04:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:13.450867 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:13.452729 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:13.453931 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:13.455530 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:13.457113 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:13.459531 | orchestrator | 2025-04-01 20:04:13 | INFO  | Task 1f279974-ffc1-4e01-90a7-e228c13a1c05 is in state STARTED 2025-04-01 20:04:16.504107 | orchestrator | 2025-04-01 20:04:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:16.504280 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state STARTED 2025-04-01 20:04:16.508361 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:16.510176 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:16.511676 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:16.514729 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:16.516001 | orchestrator | 2025-04-01 20:04:16 | INFO  | Task 1f279974-ffc1-4e01-90a7-e228c13a1c05 is in state SUCCESS 2025-04-01 20:04:16.516135 | orchestrator | 2025-04-01 20:04:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:19.560742 | orchestrator | 2025-04-01 20:04:19 | INFO  | Task eb08e7c6-640d-4f3d-b3c9-e63bb8df43c3 is in state SUCCESS 2025-04-01 20:04:19.561023 | orchestrator | 2025-04-01 20:04:19 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:19.564865 | orchestrator | 2025-04-01 20:04:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:19.568095 | orchestrator | 2025-04-01 20:04:19 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:19.569369 | orchestrator | 2025-04-01 20:04:19 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:22.624815 | orchestrator | 2025-04-01 20:04:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:22.898225 | orchestrator | 2025-04-01 20:04:22 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:25.677990 | orchestrator | 2025-04-01 20:04:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:25.678196 | orchestrator | 2025-04-01 20:04:22 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:25.678217 | orchestrator | 2025-04-01 20:04:22 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:25.678235 | orchestrator | 2025-04-01 20:04:22 | INFO  | Wait 1 second(s) until the next check 
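Each glance_api item also declares a container healthcheck: 'healthcheck_curl http://192.168.16.1N:9292' every 30 seconds with 3 retries. healthcheck_curl is a helper shipped inside the kolla images; the snippet below is only a rough stand-in for its observable behaviour (exit 0 when the API socket answers, non-zero otherwise), written against the Python standard library.

# Rough stand-in for the in-container health probe; the real helper is
# kolla's healthcheck_curl script, this only mimics what it checks.
import sys
import urllib.request
from urllib.error import HTTPError

def probe(url: str, timeout: float = 30.0) -> int:
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return 0
    except HTTPError as exc:
        # A 4xx answer still proves the service is up and responding.
        return 0 if exc.code < 500 else 1
    except Exception:
        return 1

if __name__ == "__main__":
    sys.exit(probe("http://192.168.16.10:9292"))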
2025-04-01 20:04:25.678295 | orchestrator | 2025-04-01 20:04:25 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:25.679322 | orchestrator | 2025-04-01 20:04:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:25.683332 | orchestrator | 2025-04-01 20:04:25 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:25.685194 | orchestrator | 2025-04-01 20:04:25 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:28.732282 | orchestrator | 2025-04-01 20:04:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:28.732416 | orchestrator | 2025-04-01 20:04:28 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:28.734519 | orchestrator | 2025-04-01 20:04:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:28.734630 | orchestrator | 2025-04-01 20:04:28 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:28.736361 | orchestrator | 2025-04-01 20:04:28 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:31.789218 | orchestrator | 2025-04-01 20:04:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:31.789354 | orchestrator | 2025-04-01 20:04:31 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:31.790665 | orchestrator | 2025-04-01 20:04:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:31.792922 | orchestrator | 2025-04-01 20:04:31 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:31.794738 | orchestrator | 2025-04-01 20:04:31 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:34.844956 | orchestrator | 2025-04-01 20:04:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:34.845076 | orchestrator | 2025-04-01 20:04:34 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:34.848277 | orchestrator | 2025-04-01 20:04:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:34.850093 | orchestrator | 2025-04-01 20:04:34 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:34.852033 | orchestrator | 2025-04-01 20:04:34 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:37.902430 | orchestrator | 2025-04-01 20:04:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:37.902561 | orchestrator | 2025-04-01 20:04:37 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:37.907230 | orchestrator | 2025-04-01 20:04:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:37.908224 | orchestrator | 2025-04-01 20:04:37 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:37.910197 | orchestrator | 2025-04-01 20:04:37 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:37.910696 | orchestrator | 2025-04-01 20:04:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:40.968675 | orchestrator | 2025-04-01 20:04:40 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:40.970098 | orchestrator | 2025-04-01 20:04:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 
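The glance bootstrap sequence logged earlier follows a deliberate order: create the database, create the user and grants, enable log_bin_trust_function_creators, run the bootstrap container, then disable the flag again. The flag is needed because the schema migrations create stored functions/triggers, which a binlog-enabled MariaDB rejects by default. A compressed sketch of that order of operations is below; pymysql, the credentials/host, and the exact bootstrap command are assumptions for illustration, not values taken from the playbook.

# Sketch of the bootstrap ordering shown in the log; host, credentials and
# the db-sync invocation are hypothetical placeholders.
import subprocess
import pymysql

conn = pymysql.connect(host="192.168.16.9", user="root", password="secret")
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE IF NOT EXISTS glance")
    cur.execute("CREATE USER IF NOT EXISTS 'glance'@'%' IDENTIFIED BY 'secret'")
    cur.execute("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'")
    # Required while migrations create functions/triggers on a binlogged DB.
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
conn.commit()

# Roughly what the bootstrap container ends up running for Glance.
subprocess.run(["glance-manage", "db", "sync"], check=True)

with conn.cursor() as cur:
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
conn.commit()
conn.close()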
2025-04-01 20:04:40.972000 | orchestrator | 2025-04-01 20:04:40 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:40.973645 | orchestrator | 2025-04-01 20:04:40 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:44.023532 | orchestrator | 2025-04-01 20:04:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:44.023668 | orchestrator | 2025-04-01 20:04:44 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:44.025614 | orchestrator | 2025-04-01 20:04:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:44.027884 | orchestrator | 2025-04-01 20:04:44 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:44.030254 | orchestrator | 2025-04-01 20:04:44 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:47.066734 | orchestrator | 2025-04-01 20:04:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:47.066901 | orchestrator | 2025-04-01 20:04:47 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:47.069860 | orchestrator | 2025-04-01 20:04:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:47.072674 | orchestrator | 2025-04-01 20:04:47 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:47.075279 | orchestrator | 2025-04-01 20:04:47 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:47.076200 | orchestrator | 2025-04-01 20:04:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:50.122307 | orchestrator | 2025-04-01 20:04:50 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:50.123276 | orchestrator | 2025-04-01 20:04:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:50.124840 | orchestrator | 2025-04-01 20:04:50 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:50.134443 | orchestrator | 2025-04-01 20:04:50 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:53.180970 | orchestrator | 2025-04-01 20:04:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:53.181119 | orchestrator | 2025-04-01 20:04:53 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:53.183051 | orchestrator | 2025-04-01 20:04:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:53.184839 | orchestrator | 2025-04-01 20:04:53 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:53.186819 | orchestrator | 2025-04-01 20:04:53 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:04:56.231846 | orchestrator | 2025-04-01 20:04:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:56.231966 | orchestrator | 2025-04-01 20:04:56 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:56.232757 | orchestrator | 2025-04-01 20:04:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:56.232797 | orchestrator | 2025-04-01 20:04:56 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:56.235052 | orchestrator | 2025-04-01 20:04:56 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 
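The repeating 'Task <uuid> is in state STARTED' / 'Wait 1 second(s) until the next check' lines that fill this part of the log are a client-side poll: the deploy job launches several kolla playbooks as background tasks and blocks until each reports SUCCESS, re-checking every second. The loop below is a minimal illustration of that pattern; get_state is a placeholder for the real osism/Celery task-state lookup, not its actual API.

# Minimal illustration of the wait loop visible in the log output above.
import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)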
2025-04-01 20:04:59.286661 | orchestrator | 2025-04-01 20:04:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:04:59.286816 | orchestrator | 2025-04-01 20:04:59 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:04:59.290475 | orchestrator | 2025-04-01 20:04:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:04:59.291066 | orchestrator | 2025-04-01 20:04:59 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:04:59.291098 | orchestrator | 2025-04-01 20:04:59 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:02.350976 | orchestrator | 2025-04-01 20:04:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:02.351135 | orchestrator | 2025-04-01 20:05:02 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:02.351968 | orchestrator | 2025-04-01 20:05:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:02.353654 | orchestrator | 2025-04-01 20:05:02 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:02.354876 | orchestrator | 2025-04-01 20:05:02 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:02.355429 | orchestrator | 2025-04-01 20:05:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:05.408749 | orchestrator | 2025-04-01 20:05:05 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:05.411064 | orchestrator | 2025-04-01 20:05:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:05.412088 | orchestrator | 2025-04-01 20:05:05 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:05.412305 | orchestrator | 2025-04-01 20:05:05 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:05.413071 | orchestrator | 2025-04-01 20:05:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:08.468887 | orchestrator | 2025-04-01 20:05:08 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:08.470245 | orchestrator | 2025-04-01 20:05:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:08.471366 | orchestrator | 2025-04-01 20:05:08 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:08.472458 | orchestrator | 2025-04-01 20:05:08 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:11.518756 | orchestrator | 2025-04-01 20:05:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:11.518907 | orchestrator | 2025-04-01 20:05:11 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:11.519010 | orchestrator | 2025-04-01 20:05:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:11.520094 | orchestrator | 2025-04-01 20:05:11 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:11.520982 | orchestrator | 2025-04-01 20:05:11 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:14.561084 | orchestrator | 2025-04-01 20:05:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:14.561250 | orchestrator | 2025-04-01 20:05:14 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:14.563652 
| orchestrator | 2025-04-01 20:05:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:14.564618 | orchestrator | 2025-04-01 20:05:14 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:14.565500 | orchestrator | 2025-04-01 20:05:14 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:17.615221 | orchestrator | 2025-04-01 20:05:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:17.615406 | orchestrator | 2025-04-01 20:05:17 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:17.617990 | orchestrator | 2025-04-01 20:05:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:20.660148 | orchestrator | 2025-04-01 20:05:17 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:20.660317 | orchestrator | 2025-04-01 20:05:17 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:20.660337 | orchestrator | 2025-04-01 20:05:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:20.660372 | orchestrator | 2025-04-01 20:05:20 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:20.664333 | orchestrator | 2025-04-01 20:05:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:20.665319 | orchestrator | 2025-04-01 20:05:20 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:20.667295 | orchestrator | 2025-04-01 20:05:20 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:23.714738 | orchestrator | 2025-04-01 20:05:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:23.714941 | orchestrator | 2025-04-01 20:05:23 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:23.715298 | orchestrator | 2025-04-01 20:05:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:23.715943 | orchestrator | 2025-04-01 20:05:23 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:23.716748 | orchestrator | 2025-04-01 20:05:23 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:26.778504 | orchestrator | 2025-04-01 20:05:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:26.778666 | orchestrator | 2025-04-01 20:05:26 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:26.779296 | orchestrator | 2025-04-01 20:05:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:26.781514 | orchestrator | 2025-04-01 20:05:26 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:26.783663 | orchestrator | 2025-04-01 20:05:26 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:29.824257 | orchestrator | 2025-04-01 20:05:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:29.824417 | orchestrator | 2025-04-01 20:05:29 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:29.827217 | orchestrator | 2025-04-01 20:05:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:29.828205 | orchestrator | 2025-04-01 20:05:29 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:29.831900 | 
orchestrator | 2025-04-01 20:05:29 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:32.874135 | orchestrator | 2025-04-01 20:05:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:32.874293 | orchestrator | 2025-04-01 20:05:32 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:32.874533 | orchestrator | 2025-04-01 20:05:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:32.874580 | orchestrator | 2025-04-01 20:05:32 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:32.877690 | orchestrator | 2025-04-01 20:05:32 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:35.931301 | orchestrator | 2025-04-01 20:05:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:35.931441 | orchestrator | 2025-04-01 20:05:35 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:35.932412 | orchestrator | 2025-04-01 20:05:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:35.932473 | orchestrator | 2025-04-01 20:05:35 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:35.934574 | orchestrator | 2025-04-01 20:05:35 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:38.982648 | orchestrator | 2025-04-01 20:05:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:38.982903 | orchestrator | 2025-04-01 20:05:38 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:38.983703 | orchestrator | 2025-04-01 20:05:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:38.983737 | orchestrator | 2025-04-01 20:05:38 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:38.984621 | orchestrator | 2025-04-01 20:05:38 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:42.038586 | orchestrator | 2025-04-01 20:05:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:42.038709 | orchestrator | 2025-04-01 20:05:42 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state STARTED 2025-04-01 20:05:42.041289 | orchestrator | 2025-04-01 20:05:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:42.043042 | orchestrator | 2025-04-01 20:05:42 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:42.049149 | orchestrator | 2025-04-01 20:05:42 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:45.098637 | orchestrator | 2025-04-01 20:05:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:45.099446 | orchestrator | 2025-04-01 20:05:45.099485 | orchestrator | None 2025-04-01 20:05:45.099501 | orchestrator | 2025-04-01 20:05:45.099516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:05:45.099530 | orchestrator | 2025-04-01 20:05:45.099545 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:05:45.099559 | orchestrator | Tuesday 01 April 2025 20:02:53 +0000 (0:00:00.261) 0:00:00.261 ********* 2025-04-01 20:05:45.099574 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.099590 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:05:45.099621 | 
orchestrator | ok: [testbed-node-2] 2025-04-01 20:05:45.099637 | orchestrator | 2025-04-01 20:05:45.099651 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:05:45.099666 | orchestrator | Tuesday 01 April 2025 20:02:53 +0000 (0:00:00.528) 0:00:00.789 ********* 2025-04-01 20:05:45.099680 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-04-01 20:05:45.099694 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-04-01 20:05:45.099708 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-04-01 20:05:45.099722 | orchestrator | 2025-04-01 20:05:45.099736 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-04-01 20:05:45.099751 | orchestrator | 2025-04-01 20:05:45.099765 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-04-01 20:05:45.099779 | orchestrator | Tuesday 01 April 2025 20:02:54 +0000 (0:00:00.581) 0:00:01.370 ********* 2025-04-01 20:05:45.099819 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.099833 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:05:45.099847 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:05:45.099861 | orchestrator | 2025-04-01 20:05:45.099875 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:05:45.099890 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:05:45.099924 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:05:45.099965 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:05:45.099980 | orchestrator | 2025-04-01 20:05:45.099994 | orchestrator | 2025-04-01 20:05:45.100009 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:05:45.100023 | orchestrator | Tuesday 01 April 2025 20:04:18 +0000 (0:01:23.997) 0:01:25.367 ********* 2025-04-01 20:05:45.100037 | orchestrator | =============================================================================== 2025-04-01 20:05:45.100056 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 84.00s 2025-04-01 20:05:45.100070 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-04-01 20:05:45.100084 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2025-04-01 20:05:45.100098 | orchestrator | 2025-04-01 20:05:45.100112 | orchestrator | 2025-04-01 20:05:45.100126 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:05:45.100140 | orchestrator | 2025-04-01 20:05:45.100154 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:05:45.100168 | orchestrator | Tuesday 01 April 2025 20:03:51 +0000 (0:00:00.368) 0:00:00.368 ********* 2025-04-01 20:05:45.100182 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.100196 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:05:45.100210 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:05:45.100223 | orchestrator | 2025-04-01 20:05:45.100238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:05:45.100252 | orchestrator | Tuesday 01 
April 2025 20:03:51 +0000 (0:00:00.453) 0:00:00.822 ********* 2025-04-01 20:05:45.100266 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-04-01 20:05:45.100280 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-04-01 20:05:45.100293 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-04-01 20:05:45.100307 | orchestrator | 2025-04-01 20:05:45.100322 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-04-01 20:05:45.100336 | orchestrator | 2025-04-01 20:05:45.100349 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-01 20:05:45.100363 | orchestrator | Tuesday 01 April 2025 20:03:52 +0000 (0:00:00.383) 0:00:01.206 ********* 2025-04-01 20:05:45.100377 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:05:45.100392 | orchestrator | 2025-04-01 20:05:45.100406 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-04-01 20:05:45.100420 | orchestrator | Tuesday 01 April 2025 20:03:53 +0000 (0:00:00.971) 0:00:02.178 ********* 2025-04-01 20:05:45.100436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100549 | orchestrator | 2025-04-01 20:05:45.100563 | orchestrator | TASK [grafana : Check if extra configuration file exists] 
********************** 2025-04-01 20:05:45.100577 | orchestrator | Tuesday 01 April 2025 20:03:54 +0000 (0:00:01.203) 0:00:03.381 ********* 2025-04-01 20:05:45.100592 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-04-01 20:05:45.100607 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-04-01 20:05:45.100621 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 20:05:45.100636 | orchestrator | 2025-04-01 20:05:45.100650 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-01 20:05:45.100664 | orchestrator | Tuesday 01 April 2025 20:03:55 +0000 (0:00:00.583) 0:00:03.965 ********* 2025-04-01 20:05:45.100678 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:05:45.100692 | orchestrator | 2025-04-01 20:05:45.100706 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-04-01 20:05:45.100720 | orchestrator | Tuesday 01 April 2025 20:03:55 +0000 (0:00:00.719) 0:00:04.684 ********* 2025-04-01 20:05:45.100736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.100809 | orchestrator | 2025-04-01 20:05:45.100858 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-04-01 20:05:45.100875 | orchestrator | Tuesday 01 April 
2025 20:03:57 +0000 (0:00:01.465) 0:00:06.150 ********* 2025-04-01 20:05:45.100890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.100905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.100920 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.100938 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.100954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.100969 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.100983 | orchestrator | 2025-04-01 20:05:45.100997 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-04-01 20:05:45.101011 | orchestrator | Tuesday 01 April 2025 20:03:57 +0000 (0:00:00.626) 0:00:06.777 ********* 2025-04-01 20:05:45.101026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.101040 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.101055 | orchestrator | 
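
For readability: the item dictionary that the grafana and service-cert-copy tasks print for every host is the Kolla service definition of the grafana container. Reformatted as an indented Python literal, with the values copied verbatim from the log entries above and below, it looks like this:

grafana_service = {
    'container_name': 'grafana',
    'group': 'grafana',
    'enabled': True,
    'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206',
    'volumes': [
        '/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro',
        '/etc/localtime:/etc/localtime:ro',
        '/etc/timezone:/etc/timezone:ro',
        'kolla_logs:/var/log/kolla/',
    ],
    'dimensions': {},
    'haproxy': {
        # internal frontend
        'grafana_server': {
            'enabled': 'yes',
            'mode': 'http',
            'external': False,
            'port': '3000',
            'listen_port': '3000',
        },
        # external frontend behind api.testbed.osism.xyz
        'grafana_server_external': {
            'enabled': True,
            'mode': 'http',
            'external': True,
            'external_fqdn': 'api.testbed.osism.xyz',
            'port': '3000',
            'listen_port': '3000',
        },
    },
}

The two haproxy entries are the internal and external frontends on port 3000; the backend TLS certificate and key tasks are skipped on all three nodes, presumably because backend TLS is not enabled for this service in the testbed.
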
skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.101077 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.101123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-01 20:05:45.101140 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.101154 | orchestrator | 2025-04-01 20:05:45.101169 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-04-01 20:05:45.101183 | orchestrator | Tuesday 01 April 2025 20:03:58 +0000 (0:00:00.746) 0:00:07.523 ********* 2025-04-01 20:05:45.101197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101241 | orchestrator | 2025-04-01 20:05:45.101255 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-04-01 20:05:45.101284 | orchestrator | Tuesday 01 April 2025 20:04:00 +0000 (0:00:01.706) 0:00:09.230 ********* 2025-04-01 20:05:45.101310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.101397 | orchestrator | 2025-04-01 20:05:45.101411 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-04-01 20:05:45.101425 | orchestrator | Tuesday 01 April 2025 20:04:02 +0000 (0:00:02.077) 0:00:11.307 ********* 2025-04-01 20:05:45.101440 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.101454 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.101468 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.101482 | orchestrator | 2025-04-01 20:05:45.101496 | orchestrator | TASK [grafana : Configuring Prometheus as data source for 
Grafana] ************* 2025-04-01 20:05:45.101510 | orchestrator | Tuesday 01 April 2025 20:04:02 +0000 (0:00:00.454) 0:00:11.761 ********* 2025-04-01 20:05:45.101524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-01 20:05:45.101538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-01 20:05:45.101552 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-01 20:05:45.101566 | orchestrator | 2025-04-01 20:05:45.101585 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-04-01 20:05:45.101600 | orchestrator | Tuesday 01 April 2025 20:04:04 +0000 (0:00:01.417) 0:00:13.179 ********* 2025-04-01 20:05:45.101614 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-01 20:05:45.101629 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-01 20:05:45.101643 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-01 20:05:45.101657 | orchestrator | 2025-04-01 20:05:45.101671 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-04-01 20:05:45.101685 | orchestrator | Tuesday 01 April 2025 20:04:05 +0000 (0:00:01.530) 0:00:14.709 ********* 2025-04-01 20:05:45.101698 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 20:05:45.101713 | orchestrator | 2025-04-01 20:05:45.101741 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-04-01 20:05:45.101756 | orchestrator | Tuesday 01 April 2025 20:04:06 +0000 (0:00:00.498) 0:00:15.207 ********* 2025-04-01 20:05:45.101770 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-04-01 20:05:45.101841 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-04-01 20:05:45.101857 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.101871 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:05:45.101886 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:05:45.101900 | orchestrator | 2025-04-01 20:05:45.101914 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-04-01 20:05:45.101928 | orchestrator | Tuesday 01 April 2025 20:04:07 +0000 (0:00:00.942) 0:00:16.150 ********* 2025-04-01 20:05:45.101942 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.101957 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.101971 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.101985 | orchestrator | 2025-04-01 20:05:45.101999 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-04-01 20:05:45.102013 | orchestrator | Tuesday 01 April 2025 20:04:07 +0000 (0:00:00.492) 0:00:16.643 ********* 2025-04-01 20:05:45.102077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1063238, 'dev': 186, 'nlink': 1, 
'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1063238, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1063238, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1063227, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1063227, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1063227, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1063223, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.853001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1063223, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.853001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1063223, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.853001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1063234, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1063234, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102376 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1063234, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1063219, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.848001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1063219, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.848001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1063219, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.848001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1063224, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.854001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1063224, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.854001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1063224, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.854001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1063233, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1063233, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1063233, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.857001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102578 | orchestrator | 2025-04-01 20:05:45 | INFO  | Task ba49229b-006a-47eb-a1ec-fcdd841b1cd7 is in state SUCCESS 2025-04-01 20:05:45.102603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1063218, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1063218, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1063218, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1063205, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.842001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1063205, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.842001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1063205, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.842001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1063220, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.849001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1063220, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.849001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1063220, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.849001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1063210, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.845001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1063210, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.845001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102777 | 
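
The "Find custom grafana dashboards" and "Copying over custom dashboards" tasks above walk /operations/grafana/dashboards on the deployment host and copy every dashboard JSON, grouped into ceph/ and infrastructure/ sub-directories, into the Grafana configuration directory of each node. A rough non-Ansible equivalent could look like the following sketch; the destination path is an illustrative assumption rather than the exact path used by the kolla-ansible role.

import shutil
from pathlib import Path

SRC = Path('/operations/grafana/dashboards')    # source directory seen in the log
DEST = Path('/etc/kolla/grafana/dashboards')    # destination is an assumption for illustration

def copy_custom_dashboards(src: Path = SRC, dest: Path = DEST) -> list:
    # Copy every dashboard JSON file, preserving the per-category
    # sub-directories (ceph/, infrastructure/, ...) shown in the log.
    copied = []
    for dashboard in sorted(src.rglob('*.json')):
        target = dest / dashboard.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(dashboard, target)
        copied.append(target)
    return copied

The Ansible role does this per host with a loop over the result of the find task, which is why the same file-metadata dictionary is logged once for each testbed node.
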
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1063210, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.845001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1063229, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8560011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1063229, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8560011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1063229, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8560011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1063221, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.851001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1063221, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.851001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1063221, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.851001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1063237, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8580012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.102980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1063237, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8580012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1063237, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8580012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1063215, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1063215, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1063215, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.847001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1063225, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1063225, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1063225, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8550012, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1063206, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.843001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1063206, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.843001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1063206, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.843001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1063212, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.846001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1063212, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.846001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1063212, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.846001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1063222, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.852001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1063222, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.852001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1063222, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.852001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1063267, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103284 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1063267, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1063267, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1063261, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8710015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1063261, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8710015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1063261, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8710015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1063294, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8830016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1063294, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8830016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1063294, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8830016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1063242, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1063242, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1063242, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.860001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1063299, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8850017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1063299, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8850017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1063299, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8850017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1063282, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1063282, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1063282, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8780015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1063285, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8790016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1063285, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8790016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1063285, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8790016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1063244, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8610013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1063244, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8610013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1063244, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8610013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1063264, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8720014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1063264, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8720014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1063264, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8720014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1063309, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8870018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1063309, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8870018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1063309, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8870018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1063287, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8810015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1063287, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1743534253.8810015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1063287, 'dev': 186, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1743534253.8810015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1063247, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8630013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1063247, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8630013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1063247, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8630013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1063246, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8620012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1063246, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8620012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1063246, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8620012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1063252, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8660014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1063252, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8660014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1063252, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8660014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
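The dashboard-copy loop items, which continue below, all share the same shape: the key is the dashboard's path relative to the dashboards tree (for example 'ceph/ceph_overview.json') and the value is a stat-style record for that file (mode, uid/gid, size, inode, timestamps, permission bits), repeated once per target node. As a rough illustration only, and not the actual kolla-ansible grafana role, a comparable key/value index can be built in Python; the root directory and the selection of fields below are assumptions for the sketch.

    # Illustrative sketch: build a {relative_path: stat_metadata} mapping shaped like
    # the loop items above. Root path and field choice are assumptions, not the role's code.
    import os
    import stat

    DASHBOARD_ROOT = "/operations/grafana/dashboards"  # assumed root, matching the paths in the log

    def dashboard_index(root=DASHBOARD_ROOT):
        index = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(".json"):
                    continue
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                index[os.path.relpath(path, root)] = {
                    "path": path,
                    "mode": oct(stat.S_IMODE(st.st_mode))[2:].zfill(4),  # e.g. '0644'
                    "uid": st.st_uid,
                    "gid": st.st_gid,
                    "size": st.st_size,
                    "inode": st.st_ino,
                    "mtime": st.st_mtime,
                }
        return index

    if __name__ == "__main__":
        for key, value in sorted(dashboard_index().items()):
            print(key, value["size"])

An index like this, looped over once per node, yields the kind of per-file, per-node lines seen in this task's output.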
2025-04-01 20:05:45.103979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1063254, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8700013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.103991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1063254, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8700013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.104019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1063254, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8700013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.104038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1063314, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8880017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.104050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1063314, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8880017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.104061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1063314, 'dev': 186, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1743534253.8880017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-01 20:05:45.104073 | orchestrator | 2025-04-01 20:05:45.104084 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-04-01 20:05:45.104095 | orchestrator | Tuesday 01 April 2025 20:04:41 +0000 (0:00:34.050) 0:00:50.693 ********* 2025-04-01 20:05:45.104106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.104117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.104134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-01 20:05:45.104146 | orchestrator | 2025-04-01 20:05:45.104161 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-04-01 20:05:45.104172 | orchestrator | Tuesday 01 April 2025 20:04:42 +0000 (0:00:01.112) 0:00:51.805 ********* 2025-04-01 20:05:45.104183 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:05:45.104193 | orchestrator | 2025-04-01 20:05:45.104204 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-04-01 20:05:45.104214 | orchestrator | 
Tuesday 01 April 2025 20:04:46 +0000 (0:00:03.220) 0:00:55.026 ********* 2025-04-01 20:05:45.104225 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:05:45.104235 | orchestrator | 2025-04-01 20:05:45.104245 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-01 20:05:45.104256 | orchestrator | Tuesday 01 April 2025 20:04:48 +0000 (0:00:02.208) 0:00:57.234 ********* 2025-04-01 20:05:45.104266 | orchestrator | 2025-04-01 20:05:45.104277 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-01 20:05:45.104287 | orchestrator | Tuesday 01 April 2025 20:04:48 +0000 (0:00:00.069) 0:00:57.303 ********* 2025-04-01 20:05:45.104298 | orchestrator | 2025-04-01 20:05:45.104308 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-04-01 20:05:45.104318 | orchestrator | Tuesday 01 April 2025 20:04:48 +0000 (0:00:00.064) 0:00:57.368 ********* 2025-04-01 20:05:45.104329 | orchestrator | 2025-04-01 20:05:45.104339 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-04-01 20:05:45.104350 | orchestrator | Tuesday 01 April 2025 20:04:48 +0000 (0:00:00.229) 0:00:57.598 ********* 2025-04-01 20:05:45.104360 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.104371 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.104382 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:05:45.104392 | orchestrator | 2025-04-01 20:05:45.104402 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-04-01 20:05:45.104413 | orchestrator | Tuesday 01 April 2025 20:04:55 +0000 (0:00:07.143) 0:01:04.741 ********* 2025-04-01 20:05:45.104514 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.104527 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.104538 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-04-01 20:05:45.104549 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
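The two FAILED - RETRYING lines are expected: after restarting Grafana on the first node, the handler polls the service until it answers, giving up only when the retry budget is exhausted. A minimal sketch of that wait-until-ready pattern, assuming Grafana's /api/health endpoint and illustrative URL, retry count, and delay (the actual task's module and limits may differ):

    # Wait-until-ready sketch (illustrative endpoint and limits, not the role's actual task).
    import time
    import urllib.error
    import urllib.request

    def wait_for_grafana(url="https://api-int.testbed.osism.xyz:3000/api/health",  # assumed URL
                         retries=12, delay=10.0):
        for _attempt in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        return True
            except (urllib.error.URLError, OSError):
                pass  # not ready yet -> retry, like "FAILED - RETRYING (n retries left)"
            time.sleep(delay)
        return False

In this run the first two probes fail and the third succeeds, after which the remaining Grafana containers on the other nodes are restarted.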
2025-04-01 20:05:45.104559 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.104570 | orchestrator | 2025-04-01 20:05:45.104580 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-04-01 20:05:45.104591 | orchestrator | Tuesday 01 April 2025 20:05:22 +0000 (0:00:26.374) 0:01:31.116 ********* 2025-04-01 20:05:45.104607 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.104618 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:05:45.104629 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:05:45.104639 | orchestrator | 2025-04-01 20:05:45.104650 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-04-01 20:05:45.104660 | orchestrator | Tuesday 01 April 2025 20:05:37 +0000 (0:00:15.576) 0:01:46.692 ********* 2025-04-01 20:05:45.104670 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:05:45.104681 | orchestrator | 2025-04-01 20:05:45.104691 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-04-01 20:05:45.104706 | orchestrator | Tuesday 01 April 2025 20:05:39 +0000 (0:00:02.018) 0:01:48.711 ********* 2025-04-01 20:05:45.104717 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.104728 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:05:45.104738 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:05:45.104749 | orchestrator | 2025-04-01 20:05:45.104759 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-04-01 20:05:45.104770 | orchestrator | Tuesday 01 April 2025 20:05:40 +0000 (0:00:00.493) 0:01:49.204 ********* 2025-04-01 20:05:45.104795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-04-01 20:05:45.104808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-04-01 20:05:45.104820 | orchestrator | 2025-04-01 20:05:45.104831 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-04-01 20:05:45.104841 | orchestrator | Tuesday 01 April 2025 20:05:42 +0000 (0:00:02.175) 0:01:51.380 ********* 2025-04-01 20:05:45.104851 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:05:45.104862 | orchestrator | 2025-04-01 20:05:45.104872 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:05:45.104882 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:05:45.104894 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:05:45.104904 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-01 20:05:45.104914 | orchestrator | 2025-04-01 20:05:45.104925 | orchestrator | 2025-04-01 20:05:45.104935 | orchestrator | TASKS RECAP 
********************************************************************
2025-04-01 20:05:45.104951 | orchestrator | Tuesday 01 April 2025 20:05:42 +0000 (0:00:00.422) 0:01:51.803 *********
2025-04-01 20:05:48.158421 | orchestrator | ===============================================================================
2025-04-01 20:05:48.158578 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.05s
2025-04-01 20:05:48.158599 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.37s
2025-04-01 20:05:48.158615 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 15.58s
2025-04-01 20:05:48.158630 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.14s
2025-04-01 20:05:48.158645 | orchestrator | grafana : Creating grafana database ------------------------------------- 3.22s
2025-04-01 20:05:48.158660 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s
2025-04-01 20:05:48.158675 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.18s
2025-04-01 20:05:48.158723 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.08s
2025-04-01 20:05:48.158738 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.02s
2025-04-01 20:05:48.158752 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.71s
2025-04-01 20:05:48.158766 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.53s
2025-04-01 20:05:48.158807 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.47s
2025-04-01 20:05:48.158823 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.42s
2025-04-01 20:05:48.158838 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.20s
2025-04-01 20:05:48.158852 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.11s
2025-04-01 20:05:48.158866 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.97s
2025-04-01 20:05:48.158880 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.94s
2025-04-01 20:05:48.158895 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.75s
2025-04-01 20:05:48.158909 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.72s
2025-04-01 20:05:48.158923 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.63s
2025-04-01 20:05:48.158939 | orchestrator | 2025-04-01 20:05:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:05:48.159034 | orchestrator | 2025-04-01 20:05:45 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED
2025-04-01 20:05:48.159050 | orchestrator | 2025-04-01 20:05:45 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED
2025-04-01 20:05:48.159065 | orchestrator | 2025-04-01 20:05:45 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:05:48.159100 | orchestrator | 2025-04-01 20:05:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:05:48.159322 | orchestrator | 2025-04-01 20:05:48 | INFO  | Task 
710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:48.159349 | orchestrator | 2025-04-01 20:05:48 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:48.159369 | orchestrator | 2025-04-01 20:05:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:51.203636 | orchestrator | 2025-04-01 20:05:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:51.203876 | orchestrator | 2025-04-01 20:05:51 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:51.204976 | orchestrator | 2025-04-01 20:05:51 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:54.261423 | orchestrator | 2025-04-01 20:05:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:54.261556 | orchestrator | 2025-04-01 20:05:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:54.263129 | orchestrator | 2025-04-01 20:05:54 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:54.264626 | orchestrator | 2025-04-01 20:05:54 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:05:57.320030 | orchestrator | 2025-04-01 20:05:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:05:57.320177 | orchestrator | 2025-04-01 20:05:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:05:57.321022 | orchestrator | 2025-04-01 20:05:57 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:05:57.323566 | orchestrator | 2025-04-01 20:05:57 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:00.361922 | orchestrator | 2025-04-01 20:05:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:00.362117 | orchestrator | 2025-04-01 20:06:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:00.362567 | orchestrator | 2025-04-01 20:06:00 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:00.364746 | orchestrator | 2025-04-01 20:06:00 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:03.403639 | orchestrator | 2025-04-01 20:06:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:03.403833 | orchestrator | 2025-04-01 20:06:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:03.404847 | orchestrator | 2025-04-01 20:06:03 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:03.407014 | orchestrator | 2025-04-01 20:06:03 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:06.449030 | orchestrator | 2025-04-01 20:06:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:06.449163 | orchestrator | 2025-04-01 20:06:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:06.452365 | orchestrator | 2025-04-01 20:06:06 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:06.452398 | orchestrator | 2025-04-01 20:06:06 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:09.507706 | orchestrator | 2025-04-01 20:06:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:09.507894 | orchestrator | 2025-04-01 20:06:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state 
STARTED 2025-04-01 20:06:09.508081 | orchestrator | 2025-04-01 20:06:09 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:09.509065 | orchestrator | 2025-04-01 20:06:09 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:12.565541 | orchestrator | 2025-04-01 20:06:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:12.565685 | orchestrator | 2025-04-01 20:06:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:12.566739 | orchestrator | 2025-04-01 20:06:12 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:12.568206 | orchestrator | 2025-04-01 20:06:12 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:12.568942 | orchestrator | 2025-04-01 20:06:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:15.611328 | orchestrator | 2025-04-01 20:06:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:15.613974 | orchestrator | 2025-04-01 20:06:15 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:15.615121 | orchestrator | 2025-04-01 20:06:15 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:18.670889 | orchestrator | 2025-04-01 20:06:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:18.671031 | orchestrator | 2025-04-01 20:06:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:18.671608 | orchestrator | 2025-04-01 20:06:18 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:18.672853 | orchestrator | 2025-04-01 20:06:18 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:21.719193 | orchestrator | 2025-04-01 20:06:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:21.719326 | orchestrator | 2025-04-01 20:06:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:21.719912 | orchestrator | 2025-04-01 20:06:21 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:21.721748 | orchestrator | 2025-04-01 20:06:21 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:24.777620 | orchestrator | 2025-04-01 20:06:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:24.777772 | orchestrator | 2025-04-01 20:06:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:24.779537 | orchestrator | 2025-04-01 20:06:24 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:24.782202 | orchestrator | 2025-04-01 20:06:24 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:27.837595 | orchestrator | 2025-04-01 20:06:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:27.837755 | orchestrator | 2025-04-01 20:06:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:27.839905 | orchestrator | 2025-04-01 20:06:27 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:27.842884 | orchestrator | 2025-04-01 20:06:27 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:30.893165 | orchestrator | 2025-04-01 20:06:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:30.893362 | orchestrator 
| 2025-04-01 20:06:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:30.894399 | orchestrator | 2025-04-01 20:06:30 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:30.896195 | orchestrator | 2025-04-01 20:06:30 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:30.896483 | orchestrator | 2025-04-01 20:06:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:33.943472 | orchestrator | 2025-04-01 20:06:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:33.943963 | orchestrator | 2025-04-01 20:06:33 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:33.945009 | orchestrator | 2025-04-01 20:06:33 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:36.993015 | orchestrator | 2025-04-01 20:06:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:36.993156 | orchestrator | 2025-04-01 20:06:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:36.993481 | orchestrator | 2025-04-01 20:06:36 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:36.999140 | orchestrator | 2025-04-01 20:06:36 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:40.048786 | orchestrator | 2025-04-01 20:06:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:40.048956 | orchestrator | 2025-04-01 20:06:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:40.051363 | orchestrator | 2025-04-01 20:06:40 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:40.052653 | orchestrator | 2025-04-01 20:06:40 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:40.052864 | orchestrator | 2025-04-01 20:06:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:43.096641 | orchestrator | 2025-04-01 20:06:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:43.098002 | orchestrator | 2025-04-01 20:06:43 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:43.099464 | orchestrator | 2025-04-01 20:06:43 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:46.154455 | orchestrator | 2025-04-01 20:06:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:46.154577 | orchestrator | 2025-04-01 20:06:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:46.156055 | orchestrator | 2025-04-01 20:06:46 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:46.156087 | orchestrator | 2025-04-01 20:06:46 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:49.200200 | orchestrator | 2025-04-01 20:06:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:06:49.200334 | orchestrator | 2025-04-01 20:06:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:06:49.201046 | orchestrator | 2025-04-01 20:06:49 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED 2025-04-01 20:06:49.202368 | orchestrator | 2025-04-01 20:06:49 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:06:52.243839 | orchestrator | 2025-04-01 20:06:49 | 
INFO  | Wait 1 second(s) until the next check
2025-04-01 20:06:52.243967 | orchestrator | 2025-04-01 20:06:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:06:52.244208 | orchestrator | 2025-04-01 20:06:52 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state STARTED
2025-04-01 20:06:52.252287 | orchestrator | 2025-04-01 20:06:52 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED
[... repeated polling output condensed: on every check between 20:06:52 and 20:08:54 the same three tasks were reported in state STARTED, each round followed by "Wait 1 second(s) until the next check" ...]
2025-04-01 20:08:54.434129 | orchestrator | 2025-04-01 20:08:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:08:54.436682 | orchestrator | 2025-04-01 20:08:54 | INFO  | Task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 is in state SUCCESS
2025-04-01 20:08:54.439680 | orchestrator |
2025-04-01 20:08:54.439739 | orchestrator | PLAY [Group hosts based on configuration]
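The polling output above is the OSISM CLI waiting for the asynchronous (Celery-style) tasks started by the deployment: it checks the state of each task ID, sleeps between checks, and once a task reaches SUCCESS the buffered Ansible output of that run is printed, which is why the play log that follows carries earlier timestamps (19:59:54 onwards) than the surrounding console lines. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper that returns Celery-style state strings (this is not the actual OSISM client code):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task has finished.

    get_task_state is an assumed callable that returns Celery-style
    state strings such as "STARTED", "SUCCESS" or "FAILURE" for a
    given task ID.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In this excerpt only task 710053dd-6ddd-4f6e-b72d-9a2e26d84ea4 completes; the Ansible play output printed after its SUCCESS message belongs to that task.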
************************************** 2025-04-01 20:08:54.439754 | orchestrator | 2025-04-01 20:08:54.439770 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-04-01 20:08:54.439785 | orchestrator | Tuesday 01 April 2025 19:59:54 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-04-01 20:08:54.439800 | orchestrator | changed: [testbed-manager] 2025-04-01 20:08:54.439847 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.439862 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.439876 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.440003 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.440019 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.440033 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.440047 | orchestrator | 2025-04-01 20:08:54.440105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:08:54.440121 | orchestrator | Tuesday 01 April 2025 19:59:55 +0000 (0:00:01.423) 0:00:01.681 ********* 2025-04-01 20:08:54.440136 | orchestrator | changed: [testbed-manager] 2025-04-01 20:08:54.440150 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.440164 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.440179 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.440193 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.440207 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.440221 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.440235 | orchestrator | 2025-04-01 20:08:54.440249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-01 20:08:54.440263 | orchestrator | Tuesday 01 April 2025 19:59:57 +0000 (0:00:01.644) 0:00:03.326 ********* 2025-04-01 20:08:54.440279 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-04-01 20:08:54.440294 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-04-01 20:08:54.440310 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-04-01 20:08:54.440326 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-04-01 20:08:54.440343 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-04-01 20:08:54.441089 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-04-01 20:08:54.441113 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-04-01 20:08:54.441187 | orchestrator | 2025-04-01 20:08:54.441436 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-04-01 20:08:54.441460 | orchestrator | 2025-04-01 20:08:54.441475 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-01 20:08:54.441489 | orchestrator | Tuesday 01 April 2025 19:59:59 +0000 (0:00:01.781) 0:00:05.108 ********* 2025-04-01 20:08:54.441504 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.441518 | orchestrator | 2025-04-01 20:08:54.441532 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-04-01 20:08:54.441546 | orchestrator | Tuesday 01 April 2025 20:00:00 +0000 (0:00:01.201) 0:00:06.310 ********* 2025-04-01 20:08:54.441561 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-04-01 20:08:54.442190 | orchestrator | changed: 
[testbed-node-0] => (item=nova_api) 2025-04-01 20:08:54.442295 | orchestrator | 2025-04-01 20:08:54.442312 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-04-01 20:08:54.442327 | orchestrator | Tuesday 01 April 2025 20:00:06 +0000 (0:00:05.421) 0:00:11.732 ********* 2025-04-01 20:08:54.442340 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 20:08:54.442380 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-01 20:08:54.442394 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.442408 | orchestrator | 2025-04-01 20:08:54.442420 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-01 20:08:54.442433 | orchestrator | Tuesday 01 April 2025 20:00:11 +0000 (0:00:05.442) 0:00:17.174 ********* 2025-04-01 20:08:54.442446 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.442458 | orchestrator | 2025-04-01 20:08:54.442471 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-04-01 20:08:54.442483 | orchestrator | Tuesday 01 April 2025 20:00:12 +0000 (0:00:01.026) 0:00:18.200 ********* 2025-04-01 20:08:54.442496 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.442508 | orchestrator | 2025-04-01 20:08:54.442521 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-04-01 20:08:54.442533 | orchestrator | Tuesday 01 April 2025 20:00:14 +0000 (0:00:01.846) 0:00:20.047 ********* 2025-04-01 20:08:54.442546 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.442558 | orchestrator | 2025-04-01 20:08:54.442571 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-01 20:08:54.442583 | orchestrator | Tuesday 01 April 2025 20:00:17 +0000 (0:00:03.321) 0:00:23.368 ********* 2025-04-01 20:08:54.442596 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.442620 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.442633 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.442646 | orchestrator | 2025-04-01 20:08:54.442658 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-01 20:08:54.442671 | orchestrator | Tuesday 01 April 2025 20:00:18 +0000 (0:00:00.529) 0:00:23.897 ********* 2025-04-01 20:08:54.442683 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.442697 | orchestrator | 2025-04-01 20:08:54.442709 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-04-01 20:08:54.442721 | orchestrator | Tuesday 01 April 2025 20:00:45 +0000 (0:00:26.941) 0:00:50.839 ********* 2025-04-01 20:08:54.442734 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.442746 | orchestrator | 2025-04-01 20:08:54.442759 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-01 20:08:54.442777 | orchestrator | Tuesday 01 April 2025 20:01:02 +0000 (0:00:16.957) 0:01:07.796 ********* 2025-04-01 20:08:54.442789 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.442802 | orchestrator | 2025-04-01 20:08:54.442814 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-01 20:08:54.442863 | orchestrator | Tuesday 01 April 2025 20:01:14 +0000 (0:00:12.479) 0:01:20.279 ********* 2025-04-01 20:08:54.442891 | orchestrator | ok: [testbed-node-0] 2025-04-01 
20:08:54.442905 | orchestrator | 2025-04-01 20:08:54.442918 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-04-01 20:08:54.442930 | orchestrator | Tuesday 01 April 2025 20:01:16 +0000 (0:00:02.125) 0:01:22.405 ********* 2025-04-01 20:08:54.442943 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.442956 | orchestrator | 2025-04-01 20:08:54.442968 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-01 20:08:54.442981 | orchestrator | Tuesday 01 April 2025 20:01:17 +0000 (0:00:00.985) 0:01:23.390 ********* 2025-04-01 20:08:54.442994 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.443006 | orchestrator | 2025-04-01 20:08:54.443024 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-01 20:08:54.443037 | orchestrator | Tuesday 01 April 2025 20:01:19 +0000 (0:00:01.640) 0:01:25.031 ********* 2025-04-01 20:08:54.443049 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.443062 | orchestrator | 2025-04-01 20:08:54.443074 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-01 20:08:54.443086 | orchestrator | Tuesday 01 April 2025 20:01:37 +0000 (0:00:18.408) 0:01:43.440 ********* 2025-04-01 20:08:54.443107 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.443119 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443132 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443144 | orchestrator | 2025-04-01 20:08:54.443156 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-04-01 20:08:54.443169 | orchestrator | 2025-04-01 20:08:54.443181 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-01 20:08:54.443194 | orchestrator | Tuesday 01 April 2025 20:01:39 +0000 (0:00:01.696) 0:01:45.136 ********* 2025-04-01 20:08:54.443206 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.443219 | orchestrator | 2025-04-01 20:08:54.443231 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-04-01 20:08:54.443244 | orchestrator | Tuesday 01 April 2025 20:01:43 +0000 (0:00:03.973) 0:01:49.110 ********* 2025-04-01 20:08:54.443256 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443269 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443282 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.443294 | orchestrator | 2025-04-01 20:08:54.443307 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-04-01 20:08:54.443319 | orchestrator | Tuesday 01 April 2025 20:01:47 +0000 (0:00:04.115) 0:01:53.225 ********* 2025-04-01 20:08:54.443331 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443344 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443356 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.443368 | orchestrator | 2025-04-01 20:08:54.443381 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-01 20:08:54.443393 | orchestrator | Tuesday 01 April 2025 20:01:50 +0000 (0:00:03.401) 0:01:56.626 ********* 2025-04-01 20:08:54.443406 | orchestrator | skipping: 
[testbed-node-0] 2025-04-01 20:08:54.443418 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443430 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443443 | orchestrator | 2025-04-01 20:08:54.443455 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-01 20:08:54.443467 | orchestrator | Tuesday 01 April 2025 20:01:52 +0000 (0:00:01.951) 0:01:58.578 ********* 2025-04-01 20:08:54.443480 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-01 20:08:54.443493 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443505 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-01 20:08:54.443518 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443530 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-01 20:08:54.443542 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-04-01 20:08:54.443555 | orchestrator | 2025-04-01 20:08:54.443567 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-01 20:08:54.443580 | orchestrator | Tuesday 01 April 2025 20:02:01 +0000 (0:00:09.148) 0:02:07.726 ********* 2025-04-01 20:08:54.443592 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.443609 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443622 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443634 | orchestrator | 2025-04-01 20:08:54.443647 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-01 20:08:54.443659 | orchestrator | Tuesday 01 April 2025 20:02:02 +0000 (0:00:00.886) 0:02:08.613 ********* 2025-04-01 20:08:54.443672 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-01 20:08:54.443684 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.443697 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-01 20:08:54.443709 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443722 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-01 20:08:54.443734 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443747 | orchestrator | 2025-04-01 20:08:54.443759 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-01 20:08:54.443772 | orchestrator | Tuesday 01 April 2025 20:02:04 +0000 (0:00:01.827) 0:02:10.440 ********* 2025-04-01 20:08:54.443791 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443804 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443834 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.443848 | orchestrator | 2025-04-01 20:08:54.443861 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-04-01 20:08:54.443873 | orchestrator | Tuesday 01 April 2025 20:02:05 +0000 (0:00:00.657) 0:02:11.098 ********* 2025-04-01 20:08:54.443886 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.443899 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443912 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.443924 | orchestrator | 2025-04-01 20:08:54.443937 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-04-01 20:08:54.443949 | orchestrator | Tuesday 01 April 2025 20:02:06 +0000 (0:00:01.280) 0:02:12.379 ********* 2025-04-01 20:08:54.443962 | orchestrator | skipping: 
[testbed-node-1] 2025-04-01 20:08:54.443974 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.443993 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.444006 | orchestrator | 2025-04-01 20:08:54.444019 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-04-01 20:08:54.444031 | orchestrator | Tuesday 01 April 2025 20:02:09 +0000 (0:00:03.033) 0:02:15.412 ********* 2025-04-01 20:08:54.444044 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444056 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444068 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.444081 | orchestrator | 2025-04-01 20:08:54.444094 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-01 20:08:54.444106 | orchestrator | Tuesday 01 April 2025 20:02:33 +0000 (0:00:24.038) 0:02:39.451 ********* 2025-04-01 20:08:54.444119 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444132 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444145 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.444157 | orchestrator | 2025-04-01 20:08:54.444170 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-01 20:08:54.444182 | orchestrator | Tuesday 01 April 2025 20:02:46 +0000 (0:00:12.457) 0:02:51.908 ********* 2025-04-01 20:08:54.444195 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.444207 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444220 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444232 | orchestrator | 2025-04-01 20:08:54.444249 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-04-01 20:08:54.444262 | orchestrator | Tuesday 01 April 2025 20:02:47 +0000 (0:00:01.393) 0:02:53.302 ********* 2025-04-01 20:08:54.444275 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444288 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444300 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.444313 | orchestrator | 2025-04-01 20:08:54.444325 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-04-01 20:08:54.444337 | orchestrator | Tuesday 01 April 2025 20:03:00 +0000 (0:00:12.525) 0:03:05.827 ********* 2025-04-01 20:08:54.444350 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444367 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.444380 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444392 | orchestrator | 2025-04-01 20:08:54.444404 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-01 20:08:54.444417 | orchestrator | Tuesday 01 April 2025 20:03:02 +0000 (0:00:01.963) 0:03:07.791 ********* 2025-04-01 20:08:54.444430 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.444443 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.444455 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.444467 | orchestrator | 2025-04-01 20:08:54.444480 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-04-01 20:08:54.444492 | orchestrator | 2025-04-01 20:08:54.444505 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-01 20:08:54.444524 | orchestrator | Tuesday 01 April 2025 20:03:02 +0000 
(0:00:00.520) 0:03:08.311 ********* 2025-04-01 20:08:54.444537 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.444550 | orchestrator | 2025-04-01 20:08:54.444562 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-04-01 20:08:54.444575 | orchestrator | Tuesday 01 April 2025 20:03:03 +0000 (0:00:00.909) 0:03:09.221 ********* 2025-04-01 20:08:54.444588 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-04-01 20:08:54.444600 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-04-01 20:08:54.444613 | orchestrator | 2025-04-01 20:08:54.444625 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-04-01 20:08:54.444638 | orchestrator | Tuesday 01 April 2025 20:03:06 +0000 (0:00:03.212) 0:03:12.433 ********* 2025-04-01 20:08:54.444650 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-04-01 20:08:54.444663 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-04-01 20:08:54.444676 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-04-01 20:08:54.444688 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-04-01 20:08:54.444701 | orchestrator | 2025-04-01 20:08:54.444713 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-04-01 20:08:54.444726 | orchestrator | Tuesday 01 April 2025 20:03:12 +0000 (0:00:05.664) 0:03:18.097 ********* 2025-04-01 20:08:54.444739 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 20:08:54.444751 | orchestrator | 2025-04-01 20:08:54.444763 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-04-01 20:08:54.444776 | orchestrator | Tuesday 01 April 2025 20:03:15 +0000 (0:00:02.840) 0:03:20.938 ********* 2025-04-01 20:08:54.444789 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 20:08:54.444801 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-04-01 20:08:54.444814 | orchestrator | 2025-04-01 20:08:54.444842 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-04-01 20:08:54.444855 | orchestrator | Tuesday 01 April 2025 20:03:19 +0000 (0:00:04.178) 0:03:25.117 ********* 2025-04-01 20:08:54.444868 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 20:08:54.444880 | orchestrator | 2025-04-01 20:08:54.444893 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-04-01 20:08:54.444906 | orchestrator | Tuesday 01 April 2025 20:03:22 +0000 (0:00:03.296) 0:03:28.414 ********* 2025-04-01 20:08:54.444918 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-04-01 20:08:54.444931 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-04-01 20:08:54.444943 | orchestrator | 2025-04-01 20:08:54.444955 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-01 20:08:54.445149 | orchestrator | Tuesday 01 April 2025 20:03:30 +0000 (0:00:07.443) 0:03:35.857 
********* 2025-04-01 20:08:54.445206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.445233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.445248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.445341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.445362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.445398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.445412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.445425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.445439 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.445452 | orchestrator | 2025-04-01 20:08:54.445465 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-04-01 20:08:54.445478 | orchestrator | Tuesday 01 April 2025 20:03:32 +0000 (0:00:02.355) 0:03:38.213 ********* 2025-04-01 20:08:54.445490 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.445503 | orchestrator | 2025-04-01 20:08:54.445515 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-04-01 20:08:54.445528 | orchestrator | Tuesday 01 April 2025 20:03:32 +0000 (0:00:00.153) 0:03:38.366 ********* 2025-04-01 20:08:54.445540 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.445553 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.445565 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.445578 | orchestrator | 2025-04-01 20:08:54.445590 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-04-01 20:08:54.445603 | orchestrator | Tuesday 01 April 2025 20:03:33 +0000 (0:00:00.787) 0:03:39.154 ********* 2025-04-01 20:08:54.445615 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-01 20:08:54.445628 | orchestrator | 2025-04-01 20:08:54.445708 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-04-01 20:08:54.445726 | orchestrator | Tuesday 01 April 2025 20:03:34 +0000 (0:00:00.651) 0:03:39.805 ********* 2025-04-01 20:08:54.445754 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.445767 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.445780 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.445792 | orchestrator | 2025-04-01 20:08:54.445805 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-01 20:08:54.445915 | orchestrator | Tuesday 01 April 2025 20:03:35 +0000 (0:00:01.083) 0:03:40.889 ********* 2025-04-01 20:08:54.445931 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.445944 | orchestrator | 2025-04-01 20:08:54.445957 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-01 20:08:54.445969 | orchestrator | Tuesday 01 April 2025 20:03:36 +0000 (0:00:01.262) 0:03:42.152 ********* 2025-04-01 20:08:54.445983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.446203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.446216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.446230 | orchestrator | 2025-04-01 20:08:54.446242 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-01 20:08:54.446255 | orchestrator | Tuesday 01 April 2025 20:03:39 +0000 (0:00:03.494) 0:03:45.646 ********* 2025-04-01 20:08:54.446268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446368 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.446384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446406 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.446416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446455 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.446466 | orchestrator | 2025-04-01 20:08:54.446476 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-01 20:08:54.446487 | orchestrator | Tuesday 01 April 2025 20:03:41 +0000 (0:00:01.236) 0:03:46.882 ********* 2025-04-01 20:08:54.446552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446580 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.446591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446630 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.446695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.446711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446722 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.446732 | orchestrator | 2025-04-01 
20:08:54.446743 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-04-01 20:08:54.446753 | orchestrator | Tuesday 01 April 2025 20:03:42 +0000 (0:00:01.343) 0:03:48.226 ********* 2025-04-01 20:08:54.446763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.446904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.446916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.446927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.446944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447037 | orchestrator | 2025-04-01 20:08:54.447047 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-04-01 20:08:54.447057 | orchestrator | Tuesday 01 April 2025 20:03:45 +0000 (0:00:02.633) 0:03:50.860 ********* 2025-04-01 20:08:54.447068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447318 | orchestrator | 2025-04-01 20:08:54.447329 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-04-01 20:08:54.447339 | orchestrator | Tuesday 01 April 2025 20:03:51 +0000 (0:00:06.860) 0:03:57.721 ********* 2025-04-01 20:08:54.447350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.447361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.447399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-04-01 20:08:54.447486 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.447497 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.447507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-01 20:08:54.447535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447557 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.447567 | orchestrator | 2025-04-01 20:08:54.447578 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-04-01 20:08:54.447588 | orchestrator | Tuesday 01 April 2025 20:03:53 +0000 (0:00:01.081) 0:03:58.802 ********* 2025-04-01 20:08:54.447598 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.447609 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.447619 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.447629 | orchestrator | 2025-04-01 20:08:54.447639 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-04-01 20:08:54.447650 | orchestrator | Tuesday 01 April 2025 20:03:54 +0000 (0:00:01.734) 
0:04:00.536 ********* 2025-04-01 20:08:54.447710 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.447724 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.447735 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.447745 | orchestrator | 2025-04-01 20:08:54.447755 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-04-01 20:08:54.447766 | orchestrator | Tuesday 01 April 2025 20:03:55 +0000 (0:00:00.563) 0:04:01.100 ********* 2025-04-01 20:08:54.447776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-01 20:08:54.447899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.447925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.447953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-04-01 20:08:54.447974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-04-01 20:08:54.447984 | orchestrator |
2025-04-01 20:08:54.447995 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-04-01 20:08:54.448005 | orchestrator | Tuesday 01 April 2025 20:03:57 +0000 (0:00:02.108) 0:04:03.209 *********
2025-04-01 20:08:54.448016 | orchestrator |
2025-04-01 20:08:54.448026 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-04-01 20:08:54.448036 | orchestrator | Tuesday 01 April 2025 20:03:57 +0000 (0:00:00.295) 0:04:03.505 *********
2025-04-01 20:08:54.448046 | orchestrator |
2025-04-01 20:08:54.448057 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-04-01 20:08:54.448067 | orchestrator | Tuesday 01 April 2025 20:03:57 +0000 (0:00:00.128) 0:04:03.634 *********
2025-04-01 20:08:54.448077 | orchestrator |
2025-04-01 20:08:54.448087 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-04-01 20:08:54.448148 | orchestrator | Tuesday 01 April 2025 20:03:58 +0000 (0:00:00.333) 0:04:03.967 *********
2025-04-01 20:08:54.448163 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:08:54.448173 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:08:54.448183 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:08:54.448193 | orchestrator |
2025-04-01 20:08:54.448204 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-04-01 20:08:54.448214 | orchestrator | Tuesday 01 April 2025 20:04:09 +0000 (0:00:11.534) 0:04:15.502 *********
2025-04-01 20:08:54.448224 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:08:54.448234 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:08:54.448244 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:08:54.448255 | orchestrator |
2025-04-01 20:08:54.448265 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-04-01 20:08:54.448281 | orchestrator |
2025-04-01 20:08:54.448292 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-04-01 20:08:54.448302 | orchestrator | Tuesday 01 April 2025 20:04:20 +0000 (0:00:10.305) 0:04:25.807 *********
2025-04-01 20:08:54.448312 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-04-01 20:08:54.448324 | orchestrator |
2025-04-01 20:08:54.448339 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-04-01 20:08:54.448349 | orchestrator | Tuesday 01 April 2025 20:04:21 +0000 (0:00:01.651) 0:04:27.459 *********
2025-04-01 20:08:54.448359 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:08:54.448369 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:08:54.448379 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:08:54.448390 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:08:54.448400 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:08:54.448410 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:08:54.448420 | orchestrator |
2025-04-01 20:08:54.448430 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-04-01 20:08:54.448440 | orchestrator | Tuesday 01 April 2025 20:04:22 +0000 (0:00:00.878) 0:04:28.337 *********
2025-04-01 20:08:54.448450 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:08:54.448460 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:08:54.448470 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:08:54.448481 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-04-01 20:08:54.448491 | orchestrator |
2025-04-01 20:08:54.448501 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-04-01 20:08:54.448511 | orchestrator | Tuesday 01 April 2025 20:04:24 +0000 (0:00:01.431) 0:04:29.769 *********
2025-04-01 20:08:54.448521 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-04-01 20:08:54.448532 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-04-01 20:08:54.448542 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-04-01 20:08:54.448552 | orchestrator |
2025-04-01 20:08:54.448562 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-04-01 20:08:54.448572 | orchestrator | Tuesday 01 April 2025 20:04:24 +0000 (0:00:00.726) 0:04:30.495 *********
2025-04-01 20:08:54.448583 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-04-01 20:08:54.448593 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-04-01 20:08:54.448603 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-04-01 20:08:54.448613 | orchestrator |
2025-04-01 20:08:54.448623 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-04-01 20:08:54.448634 | orchestrator | Tuesday 01 April 2025 20:04:26 +0000 (0:00:01.549) 0:04:32.044 *********
2025-04-01 20:08:54.448644 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-04-01 20:08:54.448654 | orchestrator | skipping: [testbed-node-3]
2025-04-01 20:08:54.448664 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-04-01 20:08:54.448674 | orchestrator | skipping: [testbed-node-4]
2025-04-01 20:08:54.448684 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-04-01 20:08:54.448694 | orchestrator | skipping: [testbed-node-5]
2025-04-01 20:08:54.448705 | orchestrator |
2025-04-01 20:08:54.448715 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-04-01 20:08:54.448725 | orchestrator | Tuesday 01 April 2025 20:04:27 +0000 (0:00:00.937) 0:04:32.982 *********
2025-04-01 20:08:54.448735 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448746 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448756 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448766 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448782 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:08:54.448794 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448805 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448860 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:08:54.448874 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448885 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448896 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:08:54.448908 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448919 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448931 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-04-01 20:08:54.448943 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-04-01 20:08:54.448954 | orchestrator |
2025-04-01 20:08:54.449019 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-04-01 20:08:54.449032 | orchestrator | Tuesday 01 April 2025 20:04:29 +0000 (0:00:02.179) 0:04:35.161 *********
2025-04-01 20:08:54.449041 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:08:54.449049 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:08:54.449065 | orchestrator | changed: [testbed-node-3]
2025-04-01 20:08:54.449074 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:08:54.449083 | orchestrator | changed: [testbed-node-4]
2025-04-01 20:08:54.449091 | orchestrator | changed: [testbed-node-5]
2025-04-01 20:08:54.449100 | orchestrator |
2025-04-01 20:08:54.449108 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-04-01 20:08:54.449117 | orchestrator | Tuesday 01 April 2025 20:04:30 +0000 (0:00:01.118) 0:04:36.279 *********
2025-04-01 20:08:54.449126 | orchestrator | skipping: [testbed-node-0]
2025-04-01 20:08:54.449134 | orchestrator | skipping: [testbed-node-1]
2025-04-01 20:08:54.449143 | orchestrator | skipping: [testbed-node-2]
2025-04-01 20:08:54.449151 | orchestrator | changed: [testbed-node-5]
2025-04-01 20:08:54.449160 | orchestrator | changed: [testbed-node-3]
2025-04-01 20:08:54.449169 | orchestrator | changed: [testbed-node-4]
2025-04-01 20:08:54.449177 | orchestrator |
2025-04-01 20:08:54.449186 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-04-01 20:08:54.449194 | orchestrator | Tuesday 01 April 2025 20:04:32 +0000 (0:00:02.241) 0:04:38.521 *********
2025-04-01 20:08:54.449204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.449340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.449366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.449422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.449443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.449453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449486 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.449549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
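The per-item results in this loop echo one kolla service definition per item, and the changed/skipping verdicts follow a simple pattern: an item is only applied where the service is enabled and the host belongs to the service's group, which is why the control-plane services (nova-conductor, nova-novncproxy) land on testbed-node-0/1/2 while the compute services (nova-compute, nova-libvirt, nova-ssh) land on testbed-node-3/4/5, and the disabled proxies are skipped everywhere. The sketch below is only a reading aid under that assumption; the actual conditionals live in the kolla-ansible nova-cell role, and the host-to-group map here is inferred from the results visible in this log rather than taken from the inventory.

# Reading aid only (assumptions noted above): approximates why each loop item
# reports "changed" on some hosts and "skipping" on others.
services = {
    "nova-conductor": {"group": "nova-conductor", "enabled": True},
    "nova-novncproxy": {"group": "nova-novncproxy", "enabled": True},
    "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
    "nova-serialproxy": {"group": "nova-serialproxy", "enabled": False},
    "nova-compute": {"group": "compute", "enabled": True},
    "nova-libvirt": {"group": "compute", "enabled": True},
    "nova-ssh": {"group": "compute", "enabled": True},
}

# Assumed grouping, inferred from this log: nodes 0-2 control, nodes 3-5 compute.
host_groups = {
    "testbed-node-0": {"nova-conductor", "nova-novncproxy"},
    "testbed-node-3": {"compute"},
}

def applied(host: str, name: str) -> bool:
    """True when the loop item would be acted on for this host."""
    svc = services[name]
    return svc["enabled"] and svc["group"] in host_groups[host]

for host in host_groups:
    for name in services:
        verdict = "changed" if applied(host, name) else "skipping"
        print(f"{host}: {name} -> {verdict}")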
2025-04-01 20:08:54.449604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.449716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.449797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.449810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.449848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.449866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.449978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.449996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 
'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.450117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.450127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450163 | orchestrator | 2025-04-01 20:08:54.450172 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-01 20:08:54.450181 | orchestrator | Tuesday 01 April 2025 20:04:36 +0000 (0:00:03.339) 0:04:41.861 ********* 2025-04-01 20:08:54.450190 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:08:54.450199 | orchestrator | 2025-04-01 20:08:54.450208 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-01 20:08:54.450216 | orchestrator | Tuesday 01 April 2025 20:04:37 +0000 (0:00:01.609) 0:04:43.471 ********* 2025-04-01 20:08:54.450273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.450557 | orchestrator | 2025-04-01 20:08:54.450566 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-01 20:08:54.450574 | orchestrator | Tuesday 01 April 2025 20:04:42 +0000 (0:00:04.804) 0:04:48.275 ********* 2025-04-01 20:08:54.450583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.450592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.450646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450664 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.450681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.450691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.450700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450709 | orchestrator | skipping: [testbed-node-5] 2025-04-01 
20:08:54.450718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.450802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.450838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450848 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.450857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.450875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450884 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.450893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.450902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450911 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.450966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.450987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.450997 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.451005 | orchestrator | 2025-04-01 20:08:54.451014 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-01 20:08:54.451023 | orchestrator | Tuesday 01 April 2025 20:04:44 +0000 (0:00:01.968) 0:04:50.244 ********* 2025-04-01 20:08:54.451040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.451050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.451059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.451107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.451123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451133 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.451142 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.451151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.451160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.451169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451182 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.451211 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.451222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451239 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.451248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.451257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451266 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.451278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.451288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.451302 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.451311 | orchestrator | 2025-04-01 20:08:54.451320 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-01 20:08:54.451329 | orchestrator | Tuesday 01 April 2025 20:04:47 +0000 (0:00:02.645) 0:04:52.890 ********* 2025-04-01 20:08:54.451337 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.451346 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.451355 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.451363 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-01 20:08:54.451372 | orchestrator | 2025-04-01 20:08:54.451381 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-04-01 20:08:54.451389 | orchestrator | Tuesday 01 April 2025 20:04:48 +0000 (0:00:01.265) 0:04:54.155 ********* 2025-04-01 20:08:54.451417 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-01 20:08:54.451427 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-01 20:08:54.451436 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-01 20:08:54.451444 | orchestrator | 2025-04-01 20:08:54.451453 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-04-01 20:08:54.451462 | orchestrator | Tuesday 01 April 2025 20:04:49 +0000 (0:00:00.990) 0:04:55.145 ********* 2025-04-01 20:08:54.451471 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-01 20:08:54.451479 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-01 20:08:54.451488 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-01 20:08:54.451497 | orchestrator | 2025-04-01 20:08:54.451505 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-04-01 20:08:54.451514 | orchestrator | Tuesday 01 April 2025 20:04:50 +0000 (0:00:00.956) 0:04:56.102 ********* 2025-04-01 20:08:54.451523 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:08:54.451531 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:08:54.451540 | orchestrator | ok: [testbed-node-5] 2025-04-01 20:08:54.451548 | orchestrator | 2025-04-01 20:08:54.451557 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-04-01 20:08:54.451566 | orchestrator | Tuesday 01 April 2025 20:04:51 +0000 (0:00:00.884) 0:04:56.986 ********* 2025-04-01 20:08:54.451574 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:08:54.451583 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:08:54.451592 | orchestrator | ok: [testbed-node-5] 2025-04-01 20:08:54.451601 | orchestrator | 2025-04-01 20:08:54.451611 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-04-01 20:08:54.451620 | orchestrator | Tuesday 01 April 2025 20:04:51 +0000 (0:00:00.370) 0:04:57.357 ********* 2025-04-01 20:08:54.451630 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-01 20:08:54.451641 | 
orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-01 20:08:54.451650 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-01 20:08:54.451660 | orchestrator | 2025-04-01 20:08:54.451670 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-04-01 20:08:54.451679 | orchestrator | Tuesday 01 April 2025 20:04:53 +0000 (0:00:01.564) 0:04:58.921 ********* 2025-04-01 20:08:54.451689 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-01 20:08:54.451698 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-01 20:08:54.451708 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-01 20:08:54.451718 | orchestrator | 2025-04-01 20:08:54.451728 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-04-01 20:08:54.451742 | orchestrator | Tuesday 01 April 2025 20:04:54 +0000 (0:00:01.331) 0:05:00.253 ********* 2025-04-01 20:08:54.451752 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-01 20:08:54.451762 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-01 20:08:54.451771 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-01 20:08:54.451781 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-04-01 20:08:54.451791 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-04-01 20:08:54.451800 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-04-01 20:08:54.451810 | orchestrator | 2025-04-01 20:08:54.451835 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-04-01 20:08:54.451845 | orchestrator | Tuesday 01 April 2025 20:05:00 +0000 (0:00:05.941) 0:05:06.195 ********* 2025-04-01 20:08:54.451855 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.451865 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.451874 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.451884 | orchestrator | 2025-04-01 20:08:54.451894 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-04-01 20:08:54.451904 | orchestrator | Tuesday 01 April 2025 20:05:00 +0000 (0:00:00.361) 0:05:06.557 ********* 2025-04-01 20:08:54.451913 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.451923 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.451932 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.451942 | orchestrator | 2025-04-01 20:08:54.451952 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-04-01 20:08:54.451961 | orchestrator | Tuesday 01 April 2025 20:05:01 +0000 (0:00:00.549) 0:05:07.107 ********* 2025-04-01 20:08:54.451970 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.451978 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.451987 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.451996 | orchestrator | 2025-04-01 20:08:54.452004 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-04-01 20:08:54.452013 | orchestrator | Tuesday 01 April 2025 20:05:03 +0000 (0:00:01.648) 0:05:08.755 ********* 2025-04-01 20:08:54.452022 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-01 20:08:54.452031 | 
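[Editor's note] At this point the keyrings and ceph.conf have been staged into the per-service config directories on the compute nodes and a libvirt secrets directory has just been created. The "Pushing nova secret xml for libvirt" task that starts here distributes one libvirt secret definition per cephx client. A minimal sketch of such a definition follows; the UUID and secret name are the ones reported for client.nova in this run, while the destination path and the copy task itself are assumptions rather than the role's actual template.

# Illustrative sketch only. The UUID and secret name are taken from this run's
# log output; the destination directory and the task shape are assumptions and
# do not reproduce the actual kolla-ansible template.
- name: Pushing nova secret xml for libvirt (sketch)
  ansible.builtin.copy:
    dest: /etc/kolla/nova-libvirt/secrets/5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd.xml  # assumed path
    mode: "0600"
    content: |
      <secret ephemeral='no' private='no'>
        <uuid>5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd</uuid>
        <usage type='ceph'>
          <name>client.nova secret</name>
        </usage>
      </secret>

An equivalent definition is pushed for client.cinder (UUID 63dd366f-e403-41f2-beff-dad9980a1637 in this run), so the same libvirtd instance can authenticate as both the nova and cinder cephx users.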
orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-01 20:08:54.452043 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-01 20:08:54.452053 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-01 20:08:54.452061 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-01 20:08:54.452070 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-01 20:08:54.452079 | orchestrator | 2025-04-01 20:08:54.452091 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-04-01 20:08:54.452120 | orchestrator | Tuesday 01 April 2025 20:05:06 +0000 (0:00:03.696) 0:05:12.452 ********* 2025-04-01 20:08:54.452130 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 20:08:54.452139 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 20:08:54.452147 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 20:08:54.452159 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-01 20:08:54.452168 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.452177 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-01 20:08:54.452191 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.452200 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-01 20:08:54.452209 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.452217 | orchestrator | 2025-04-01 20:08:54.452226 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-04-01 20:08:54.452235 | orchestrator | Tuesday 01 April 2025 20:05:10 +0000 (0:00:03.690) 0:05:16.142 ********* 2025-04-01 20:08:54.452244 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.452252 | orchestrator | 2025-04-01 20:08:54.452261 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-04-01 20:08:54.452269 | orchestrator | Tuesday 01 April 2025 20:05:10 +0000 (0:00:00.146) 0:05:16.289 ********* 2025-04-01 20:08:54.452278 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.452287 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.452301 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.452311 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.452320 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.452328 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.452337 | orchestrator | 2025-04-01 20:08:54.452346 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-04-01 20:08:54.452355 | orchestrator | Tuesday 01 April 2025 20:05:11 +0000 (0:00:01.007) 0:05:17.297 ********* 2025-04-01 20:08:54.452364 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-01 20:08:54.452372 | orchestrator | 2025-04-01 20:08:54.452381 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-04-01 20:08:54.452389 | orchestrator | Tuesday 01 April 2025 20:05:11 +0000 
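[Editor's note] "Pushing secrets key for libvirt" is the companion step: it writes the base64 cephx keys extracted earlier alongside the XML definitions (note that the loop items are printed as None rather than echoing the key material). Once the nova_libvirt container is running, the defined secrets can be spot-checked from the node. The snippet below is a generic verification idea, not a task this job runs, and it assumes docker as the container engine; the container name and secret UUID come from the log.

# Generic verification idea, not part of this job. Assumes docker as the
# container engine; container name and secret UUID are taken from the log.
- name: Check that the nova cephx secret is defined inside nova_libvirt (sketch)
  ansible.builtin.command: >-
    docker exec nova_libvirt
    virsh secret-get-value 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd
  register: nova_secret_value
  changed_when: false
  no_log: true  # stdout would contain the raw base64 cephx key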
(0:00:00.428) 0:05:17.725 ********* 2025-04-01 20:08:54.452398 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.452406 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.452415 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.452424 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.452432 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.452441 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.452450 | orchestrator | 2025-04-01 20:08:54.452458 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-04-01 20:08:54.452467 | orchestrator | Tuesday 01 April 2025 20:05:13 +0000 (0:00:01.016) 0:05:18.741 ********* 2025-04-01 20:08:54.452476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.452527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.452538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.452547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.452556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.452590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.452620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452666 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.452707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.452858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.452940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.452956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.452966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.452980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453209 | orchestrator | 2025-04-01 20:08:54.453218 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-04-01 20:08:54.453227 | orchestrator | Tuesday 01 April 2025 20:05:17 +0000 (0:00:04.173) 0:05:22.915 ********* 2025-04-01 20:08:54.453236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 
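[Editor's note] The long "Copying over config.json files for services" task above renders, per host and per enabled service, the small bootstrap file that kolla containers read at startup to copy their configuration into place and decide which command to run; disabled services (nova-spicehtml5proxy, nova-serialproxy, nova-compute-ironic) and services outside a host's groups are skipped, which is why the log alternates between changed and skipping per item. A trimmed sketch of such a file for nova_compute follows; the real contents come from kolla-ansible templates and are not shown in this log, so treat every value as an assumption.

# Trimmed, assumed example of the per-service bootstrap file written by the
# task above; the real file is rendered from kolla-ansible templates and the
# values below (command, sources, owners, permissions) may differ.
- name: Copying over config.json files for services (sketch, nova-compute only)
  ansible.builtin.copy:
    dest: /etc/kolla/nova-compute/config.json  # mounted into the container at /var/lib/kolla/config_files/
    mode: "0660"
    content: |
      {
        "command": "nova-compute",
        "config_files": [
          {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600"
          },
          {
            "source": "/var/lib/kolla/config_files/ceph.*",
            "dest": "/etc/ceph/",
            "owner": "nova",
            "perm": "0600",
            "optional": true
          }
        ]
      }

The "Copying over nova.conf" task, whose per-service output continues below, renders the matching /etc/kolla/<service>/nova.conf; on the compute nodes that file is where the RBD settings (images_type, rbd_user=nova, rbd_secret_uuid pointing at the 5a2bf0bf-… secret above) normally end up, tying the copied keyrings, the libvirt secrets and the container bootstrap together.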
20:08:54.453254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.453344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.453436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.453524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.453547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.453556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.453589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.453776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.453790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.453946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.453964 | orchestrator | 2025-04-01 20:08:54.453973 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-04-01 20:08:54.453982 | orchestrator | Tuesday 01 April 2025 20:05:25 +0000 (0:00:08.740) 0:05:31.655 ********* 2025-04-01 20:08:54.453991 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.454000 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.454008 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454050 | orchestrator | 
skipping: [testbed-node-5] 2025-04-01 20:08:54.454061 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454070 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454078 | orchestrator | 2025-04-01 20:08:54.454087 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-04-01 20:08:54.454096 | orchestrator | Tuesday 01 April 2025 20:05:27 +0000 (0:00:01.925) 0:05:33.580 ********* 2025-04-01 20:08:54.454104 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-01 20:08:54.454113 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-01 20:08:54.454122 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-01 20:08:54.454131 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-01 20:08:54.454139 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454170 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-01 20:08:54.454180 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454196 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-01 20:08:54.454204 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-01 20:08:54.454212 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454221 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-01 20:08:54.454229 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-01 20:08:54.454237 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-01 20:08:54.454245 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-01 20:08:54.454253 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-01 20:08:54.454261 | orchestrator | 2025-04-01 20:08:54.454269 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-04-01 20:08:54.454277 | orchestrator | Tuesday 01 April 2025 20:05:33 +0000 (0:00:05.949) 0:05:39.530 ********* 2025-04-01 20:08:54.454285 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.454293 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.454300 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.454308 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454316 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454324 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454332 | orchestrator | 2025-04-01 20:08:54.454340 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-04-01 20:08:54.454348 | orchestrator | Tuesday 01 April 2025 20:05:34 +0000 (0:00:01.018) 0:05:40.548 ********* 2025-04-01 20:08:54.454356 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-01 20:08:54.454364 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-01 
20:08:54.454373 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-01 20:08:54.454381 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454400 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-01 20:08:54.454408 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454416 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454424 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454432 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-01 20:08:54.454440 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-01 20:08:54.454449 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454457 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454465 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-01 20:08:54.454473 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454481 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454489 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454501 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454509 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454517 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454525 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-01 20:08:54.454533 | orchestrator | 2025-04-01 20:08:54.454541 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-04-01 20:08:54.454549 | orchestrator | Tuesday 01 April 2025 20:05:43 +0000 (0:00:08.317) 0:05:48.866 ********* 2025-04-01 20:08:54.454557 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 20:08:54.454565 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 20:08:54.454591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-01 20:08:54.454600 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-01 20:08:54.454608 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'})  2025-04-01 20:08:54.454620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 20:08:54.454628 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 20:08:54.454636 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 20:08:54.454644 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 20:08:54.454652 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-01 20:08:54.454663 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-01 20:08:54.454671 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-01 20:08:54.454680 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454688 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-01 20:08:54.454696 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454704 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-01 20:08:54.454712 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-01 20:08:54.454720 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454728 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 20:08:54.454736 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 20:08:54.454744 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-01 20:08:54.454752 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 20:08:54.454760 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 20:08:54.454768 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-01 20:08:54.454776 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 20:08:54.454784 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 20:08:54.454792 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-01 20:08:54.454800 | orchestrator | 2025-04-01 20:08:54.454808 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-04-01 20:08:54.454834 | orchestrator | Tuesday 01 April 2025 20:05:55 +0000 (0:00:12.120) 0:06:00.986 ********* 2025-04-01 20:08:54.454843 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.454851 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.454859 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.454867 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454875 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454883 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454891 | orchestrator | 2025-04-01 20:08:54.454899 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-04-01 20:08:54.454907 | orchestrator | Tuesday 01 April 2025 20:05:56 
+0000 (0:00:00.797) 0:06:01.784 ********* 2025-04-01 20:08:54.454916 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.454924 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.454932 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.454940 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454948 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.454955 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.454963 | orchestrator | 2025-04-01 20:08:54.454972 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-04-01 20:08:54.454980 | orchestrator | Tuesday 01 April 2025 20:05:57 +0000 (0:00:00.991) 0:06:02.775 ********* 2025-04-01 20:08:54.454988 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.454996 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.455004 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.455012 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.455019 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.455027 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.455039 | orchestrator | 2025-04-01 20:08:54.455047 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-04-01 20:08:54.455055 | orchestrator | Tuesday 01 April 2025 20:06:00 +0000 (0:00:03.083) 0:06:05.858 ********* 2025-04-01 20:08:54.455083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.455139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455184 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.455193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.455269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455300 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.455316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.455367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455398 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.455407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.455464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455496 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.455509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.455587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.455620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}}})  2025-04-01 20:08:54.455639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455653 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.455661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.455686 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.455694 | orchestrator | 2025-04-01 20:08:54.455702 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-04-01 20:08:54.455710 | orchestrator | Tuesday 01 April 2025 20:06:02 +0000 (0:00:02.518) 0:06:08.377 ********* 2025-04-01 20:08:54.455719 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-01 20:08:54.455727 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455735 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.455743 | orchestrator | skipping: 
[testbed-node-4] => (item=nova-compute)  2025-04-01 20:08:54.455751 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455759 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.455767 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-01 20:08:54.455776 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455784 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.455792 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-01 20:08:54.455800 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455808 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.455829 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-01 20:08:54.455838 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455846 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.455854 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-01 20:08:54.455862 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-01 20:08:54.455874 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.455883 | orchestrator | 2025-04-01 20:08:54.455891 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-04-01 20:08:54.455899 | orchestrator | Tuesday 01 April 2025 20:06:03 +0000 (0:00:00.937) 0:06:09.315 ********* 2025-04-01 20:08:54.455919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.455946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.455961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.455987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-01 20:08:54.455995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-01 20:08:54.456004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 
'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-01 20:08:54.456247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-01 20:08:54.456255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-01 20:08:54.456420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-01 20:08:54.456433 | orchestrator | 2025-04-01 20:08:54.456441 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-01 20:08:54.456449 | orchestrator | Tuesday 01 April 2025 20:06:07 +0000 (0:00:04.033) 0:06:13.349 ********* 2025-04-01 20:08:54.456457 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.456465 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.456473 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.456481 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.456489 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.456497 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.456505 | orchestrator | 2025-04-01 20:08:54.456513 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456521 | orchestrator | Tuesday 01 April 2025 20:06:08 +0000 (0:00:00.887) 
0:06:14.236 ********* 2025-04-01 20:08:54.456529 | orchestrator | 2025-04-01 20:08:54.456537 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456545 | orchestrator | Tuesday 01 April 2025 20:06:08 +0000 (0:00:00.305) 0:06:14.542 ********* 2025-04-01 20:08:54.456553 | orchestrator | 2025-04-01 20:08:54.456561 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456573 | orchestrator | Tuesday 01 April 2025 20:06:08 +0000 (0:00:00.118) 0:06:14.660 ********* 2025-04-01 20:08:54.456581 | orchestrator | 2025-04-01 20:08:54.456589 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456598 | orchestrator | Tuesday 01 April 2025 20:06:09 +0000 (0:00:00.325) 0:06:14.986 ********* 2025-04-01 20:08:54.456606 | orchestrator | 2025-04-01 20:08:54.456614 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456622 | orchestrator | Tuesday 01 April 2025 20:06:09 +0000 (0:00:00.128) 0:06:15.114 ********* 2025-04-01 20:08:54.456630 | orchestrator | 2025-04-01 20:08:54.456638 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-01 20:08:54.456646 | orchestrator | Tuesday 01 April 2025 20:06:09 +0000 (0:00:00.305) 0:06:15.420 ********* 2025-04-01 20:08:54.456654 | orchestrator | 2025-04-01 20:08:54.456662 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-04-01 20:08:54.456670 | orchestrator | Tuesday 01 April 2025 20:06:09 +0000 (0:00:00.126) 0:06:15.546 ********* 2025-04-01 20:08:54.456678 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.456686 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.456694 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.456702 | orchestrator | 2025-04-01 20:08:54.456710 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-04-01 20:08:54.456718 | orchestrator | Tuesday 01 April 2025 20:06:17 +0000 (0:00:07.961) 0:06:23.507 ********* 2025-04-01 20:08:54.456726 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.456734 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.456742 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.456750 | orchestrator | 2025-04-01 20:08:54.456758 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-04-01 20:08:54.456766 | orchestrator | Tuesday 01 April 2025 20:06:29 +0000 (0:00:11.345) 0:06:34.853 ********* 2025-04-01 20:08:54.456777 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.456785 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.456794 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.456802 | orchestrator | 2025-04-01 20:08:54.456810 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-04-01 20:08:54.456831 | orchestrator | Tuesday 01 April 2025 20:06:48 +0000 (0:00:19.776) 0:06:54.629 ********* 2025-04-01 20:08:54.456839 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.456852 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.456860 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.456868 | orchestrator | 2025-04-01 20:08:54.456876 | orchestrator | RUNNING HANDLER [nova-cell : 
Checking libvirt container is ready] ************** 2025-04-01 20:08:54.456884 | orchestrator | Tuesday 01 April 2025 20:07:14 +0000 (0:00:25.138) 0:07:19.768 ********* 2025-04-01 20:08:54.456892 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.456900 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.456908 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.456916 | orchestrator | 2025-04-01 20:08:54.456925 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-04-01 20:08:54.456933 | orchestrator | Tuesday 01 April 2025 20:07:15 +0000 (0:00:01.066) 0:07:20.834 ********* 2025-04-01 20:08:54.456941 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.456949 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.456957 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.456965 | orchestrator | 2025-04-01 20:08:54.456973 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-04-01 20:08:54.456981 | orchestrator | Tuesday 01 April 2025 20:07:15 +0000 (0:00:00.736) 0:07:21.570 ********* 2025-04-01 20:08:54.456989 | orchestrator | changed: [testbed-node-4] 2025-04-01 20:08:54.456997 | orchestrator | changed: [testbed-node-5] 2025-04-01 20:08:54.457005 | orchestrator | changed: [testbed-node-3] 2025-04-01 20:08:54.457013 | orchestrator | 2025-04-01 20:08:54.457021 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-04-01 20:08:54.457029 | orchestrator | Tuesday 01 April 2025 20:07:37 +0000 (0:00:21.285) 0:07:42.856 ********* 2025-04-01 20:08:54.457037 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.457045 | orchestrator | 2025-04-01 20:08:54.457053 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-04-01 20:08:54.457061 | orchestrator | Tuesday 01 April 2025 20:07:37 +0000 (0:00:00.134) 0:07:42.991 ********* 2025-04-01 20:08:54.457069 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.457077 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.457085 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.457093 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.457101 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.457109 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
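The "FAILED - RETRYING ... (20 retries left)" entry above is kolla-ansible polling until every nova-compute service has registered itself before it moves on to cell host discovery. As a rough illustration only, the sketch below shows that poll-with-bounded-retries pattern in plain Python; it is not taken from the playbooks, and `fetch_registered_hosts` is a hypothetical callable standing in for whatever actually lists compute services (an OpenStack API or CLI call).

# Minimal sketch, assuming a caller supplies a function that returns the set
# of currently registered nova-compute hostnames. The retries/delay numbers
# mirror the behaviour visible in the console output ("20 retries left").
import time
from typing import Callable, Iterable, Set


def wait_for_compute_registration(
    expected_hosts: Iterable[str],
    fetch_registered_hosts: Callable[[], Set[str]],  # hypothetical helper, not part of kolla-ansible
    retries: int = 20,
    delay_seconds: float = 10.0,
) -> Set[str]:
    """Poll until all expected compute hosts appear, or give up after `retries` attempts."""
    expected = set(expected_hosts)
    missing = expected
    for attempt in range(1, retries + 1):
        registered = fetch_registered_hosts()
        missing = expected - registered
        if not missing:
            # Every expected nova-compute host has registered; discovery can proceed.
            return registered
        print(f"attempt {attempt}/{retries}: still waiting for {sorted(missing)}")
        time.sleep(delay_seconds)
    raise TimeoutError(f"compute hosts never registered: {sorted(missing)}")


if __name__ == "__main__":
    # Toy stand-in: pretend the last host only shows up on the third poll.
    calls = {"n": 0}

    def fake_fetch() -> Set[str]:
        calls["n"] += 1
        hosts = {"testbed-node-3", "testbed-node-4"}
        if calls["n"] >= 3:
            hosts.add("testbed-node-5")
        return hosts

    wait_for_compute_registration(
        ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
        fake_fetch,
        retries=20,
        delay_seconds=0.01,
    )

In the run above the first poll fails (hence the RETRYING line) and a later poll succeeds, which is why the task ends in "ok" on testbed-node-5 delegated to testbed-node-0.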
2025-04-01 20:08:54.457118 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 20:08:54.457126 | orchestrator | 2025-04-01 20:08:54.457134 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-04-01 20:08:54.457142 | orchestrator | Tuesday 01 April 2025 20:07:59 +0000 (0:00:22.490) 0:08:05.481 ********* 2025-04-01 20:08:54.457150 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.457159 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.457167 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.457175 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.457183 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.457191 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.457202 | orchestrator | 2025-04-01 20:08:54.457211 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-04-01 20:08:54.457219 | orchestrator | Tuesday 01 April 2025 20:08:12 +0000 (0:00:12.612) 0:08:18.094 ********* 2025-04-01 20:08:54.457227 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.457235 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.457243 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.457251 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.457259 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.457267 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-04-01 20:08:54.457275 | orchestrator | 2025-04-01 20:08:54.457283 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-01 20:08:54.457298 | orchestrator | Tuesday 01 April 2025 20:08:18 +0000 (0:00:05.735) 0:08:23.830 ********* 2025-04-01 20:08:54.457307 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 20:08:54.457315 | orchestrator | 2025-04-01 20:08:54.457323 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-01 20:08:54.457331 | orchestrator | Tuesday 01 April 2025 20:08:29 +0000 (0:00:11.866) 0:08:35.696 ********* 2025-04-01 20:08:54.457339 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 20:08:54.457347 | orchestrator | 2025-04-01 20:08:54.457355 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-04-01 20:08:54.457363 | orchestrator | Tuesday 01 April 2025 20:08:31 +0000 (0:00:01.277) 0:08:36.973 ********* 2025-04-01 20:08:54.457371 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.457379 | orchestrator | 2025-04-01 20:08:54.457387 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-04-01 20:08:54.457395 | orchestrator | Tuesday 01 April 2025 20:08:32 +0000 (0:00:01.608) 0:08:38.582 ********* 2025-04-01 20:08:54.457403 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-01 20:08:54.457411 | orchestrator | 2025-04-01 20:08:54.457419 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-04-01 20:08:54.457427 | orchestrator | Tuesday 01 April 2025 20:08:42 +0000 (0:00:09.916) 0:08:48.498 ********* 2025-04-01 20:08:54.457435 | orchestrator | ok: [testbed-node-3] 2025-04-01 20:08:54.457443 | orchestrator | ok: [testbed-node-4] 2025-04-01 20:08:54.457451 | orchestrator | ok: 
[testbed-node-5] 2025-04-01 20:08:54.457460 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:08:54.457467 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:08:54.457475 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:08:54.457483 | orchestrator | 2025-04-01 20:08:54.457494 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-04-01 20:08:54.457503 | orchestrator | 2025-04-01 20:08:54.457511 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-04-01 20:08:54.457519 | orchestrator | Tuesday 01 April 2025 20:08:45 +0000 (0:00:03.021) 0:08:51.520 ********* 2025-04-01 20:08:54.457527 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:08:54.457535 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:08:54.457543 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:08:54.457551 | orchestrator | 2025-04-01 20:08:54.457559 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-04-01 20:08:54.457567 | orchestrator | 2025-04-01 20:08:54.457575 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-04-01 20:08:54.457583 | orchestrator | Tuesday 01 April 2025 20:08:46 +0000 (0:00:01.046) 0:08:52.567 ********* 2025-04-01 20:08:54.457591 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.457600 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.457608 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.457616 | orchestrator | 2025-04-01 20:08:54.457624 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-04-01 20:08:54.457632 | orchestrator | 2025-04-01 20:08:54.457640 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-04-01 20:08:54.457648 | orchestrator | Tuesday 01 April 2025 20:08:47 +0000 (0:00:00.908) 0:08:53.475 ********* 2025-04-01 20:08:54.457656 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-04-01 20:08:54.457664 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-01 20:08:54.457672 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-01 20:08:54.457680 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-04-01 20:08:54.457688 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-04-01 20:08:54.457696 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.457704 | orchestrator | skipping: [testbed-node-3] 2025-04-01 20:08:54.457718 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-04-01 20:08:54.457727 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-01 20:08:54.457735 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-01 20:08:54.457743 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-04-01 20:08:54.457751 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-04-01 20:08:54.457759 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.457767 | orchestrator | skipping: [testbed-node-4] 2025-04-01 20:08:54.457775 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-04-01 20:08:54.457783 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-01 20:08:54.457791 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-01 20:08:54.457799 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-04-01 20:08:54.457807 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-04-01 20:08:54.457848 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.457858 | orchestrator | skipping: [testbed-node-5] 2025-04-01 20:08:54.457866 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-04-01 20:08:54.457874 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-01 20:08:54.457882 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-01 20:08:54.457890 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-04-01 20:08:54.457898 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-04-01 20:08:54.457906 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.457914 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.457922 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-04-01 20:08:54.457930 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-01 20:08:54.457938 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-01 20:08:54.457946 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-04-01 20:08:54.457957 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-04-01 20:08:54.457965 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.457974 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.457982 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-04-01 20:08:54.457990 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-01 20:08:54.457998 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-01 20:08:54.458006 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-04-01 20:08:54.458031 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-04-01 20:08:54.458040 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-04-01 20:08:54.458048 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:54.458057 | orchestrator | 2025-04-01 20:08:54.458065 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-04-01 20:08:54.458073 | orchestrator | 2025-04-01 20:08:54.458081 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-04-01 20:08:54.458089 | orchestrator | Tuesday 01 April 2025 20:08:49 +0000 (0:00:01.691) 0:08:55.166 ********* 2025-04-01 20:08:54.458097 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-04-01 20:08:54.458105 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-04-01 20:08:54.458114 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:54.458122 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-04-01 20:08:54.458130 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-04-01 20:08:54.458138 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:54.458150 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-04-01 20:08:57.506567 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-04-01 20:08:57.506688 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:57.506709 | orchestrator | 2025-04-01 20:08:57.506724 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-04-01 20:08:57.506740 | orchestrator | 2025-04-01 20:08:57.506754 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-04-01 20:08:57.506768 | orchestrator | Tuesday 01 April 2025 20:08:50 +0000 (0:00:00.705) 0:08:55.872 ********* 2025-04-01 20:08:57.506782 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:57.506797 | orchestrator | 2025-04-01 20:08:57.506811 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-04-01 20:08:57.506874 | orchestrator | 2025-04-01 20:08:57.506889 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-04-01 20:08:57.506904 | orchestrator | Tuesday 01 April 2025 20:08:51 +0000 (0:00:01.054) 0:08:56.926 ********* 2025-04-01 20:08:57.506918 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:08:57.506932 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:08:57.506947 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:08:57.506961 | orchestrator | 2025-04-01 20:08:57.506975 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-01 20:08:57.506989 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-01 20:08:57.507006 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-04-01 20:08:57.507021 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-01 20:08:57.507036 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-01 20:08:57.507050 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-04-01 20:08:57.507064 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-04-01 20:08:57.507078 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-04-01 20:08:57.507092 | orchestrator | 2025-04-01 20:08:57.507106 | orchestrator | 2025-04-01 20:08:57.507120 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-01 20:08:57.507194 | orchestrator | Tuesday 01 April 2025 20:08:51 +0000 (0:00:00.579) 0:08:57.506 ********* 2025-04-01 20:08:57.507213 | orchestrator | =============================================================================== 2025-04-01 20:08:57.507227 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.94s 2025-04-01 20:08:57.507264 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.14s 2025-04-01 20:08:57.507279 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.04s 2025-04-01 20:08:57.507293 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.49s 2025-04-01 20:08:57.507307 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.29s 2025-04-01 20:08:57.507321 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 19.78s 2025-04-01 20:08:57.507335 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.41s 2025-04-01 20:08:57.507349 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.96s 2025-04-01 20:08:57.507363 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 12.61s 2025-04-01 20:08:57.507402 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.53s 2025-04-01 20:08:57.507417 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.48s 2025-04-01 20:08:57.507431 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.46s 2025-04-01 20:08:57.507445 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 12.12s 2025-04-01 20:08:57.507459 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.87s 2025-04-01 20:08:57.507473 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 11.53s 2025-04-01 20:08:57.507487 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.35s 2025-04-01 20:08:57.507502 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.31s 2025-04-01 20:08:57.507516 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.92s 2025-04-01 20:08:57.507530 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.15s 2025-04-01 20:08:57.507544 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.74s 2025-04-01 20:08:57.507560 | orchestrator | 2025-04-01 20:08:54 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:08:57.507575 | orchestrator | 2025-04-01 20:08:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:08:57.507608 | orchestrator | 2025-04-01 20:08:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:08:57.509010 | orchestrator | 2025-04-01 20:08:57 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state STARTED 2025-04-01 20:09:00.563948 | orchestrator | 2025-04-01 20:08:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:00.564076 | orchestrator | 2025-04-01 20:09:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:09:00.567218 | orchestrator | 2025-04-01 20:09:00 | INFO  | Task 38938b93-3e07-4853-b0de-591a51bd9ece is in state SUCCESS 2025-04-01 20:09:00.570199 | orchestrator | 2025-04-01 20:09:00.570292 | orchestrator | 2025-04-01 20:09:00.570314 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-01 20:09:00.570330 | orchestrator | 2025-04-01 20:09:00.570345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-01 20:09:00.570360 | orchestrator | Tuesday 01 April 2025 20:03:38 +0000 (0:00:00.428) 0:00:00.428 ********* 2025-04-01 20:09:00.570375 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.570391 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:09:00.570743 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:09:00.570762 | orchestrator | 2025-04-01 20:09:00.570777 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-04-01 20:09:00.570791 | orchestrator | Tuesday 01 April 2025 20:03:38 +0000 (0:00:00.503) 0:00:00.932 ********* 2025-04-01 20:09:00.570806 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-04-01 20:09:00.570853 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-04-01 20:09:00.570869 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-04-01 20:09:00.570883 | orchestrator | 2025-04-01 20:09:00.570898 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-04-01 20:09:00.570912 | orchestrator | 2025-04-01 20:09:00.570926 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.570940 | orchestrator | Tuesday 01 April 2025 20:03:39 +0000 (0:00:00.391) 0:00:01.324 ********* 2025-04-01 20:09:00.570955 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:09:00.570970 | orchestrator | 2025-04-01 20:09:00.570984 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-04-01 20:09:00.570998 | orchestrator | Tuesday 01 April 2025 20:03:40 +0000 (0:00:01.105) 0:00:02.429 ********* 2025-04-01 20:09:00.571037 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-04-01 20:09:00.571052 | orchestrator | 2025-04-01 20:09:00.571066 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-04-01 20:09:00.571080 | orchestrator | Tuesday 01 April 2025 20:03:43 +0000 (0:00:03.459) 0:00:05.888 ********* 2025-04-01 20:09:00.571094 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-04-01 20:09:00.571109 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-04-01 20:09:00.571124 | orchestrator | 2025-04-01 20:09:00.571138 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-04-01 20:09:00.571158 | orchestrator | Tuesday 01 April 2025 20:03:49 +0000 (0:00:05.819) 0:00:11.708 ********* 2025-04-01 20:09:00.571172 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-01 20:09:00.571187 | orchestrator | 2025-04-01 20:09:00.571200 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-04-01 20:09:00.571215 | orchestrator | Tuesday 01 April 2025 20:03:52 +0000 (0:00:03.389) 0:00:15.098 ********* 2025-04-01 20:09:00.571229 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-01 20:09:00.571243 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-01 20:09:00.571258 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-01 20:09:00.571272 | orchestrator | 2025-04-01 20:09:00.571286 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-04-01 20:09:00.571300 | orchestrator | Tuesday 01 April 2025 20:04:00 +0000 (0:00:07.711) 0:00:22.810 ********* 2025-04-01 20:09:00.571314 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-01 20:09:00.571328 | orchestrator | 2025-04-01 20:09:00.571342 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-04-01 20:09:00.571356 | orchestrator | Tuesday 01 April 2025 
20:04:03 +0000 (0:00:03.225) 0:00:26.035 ********* 2025-04-01 20:09:00.571371 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-01 20:09:00.571385 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-01 20:09:00.571398 | orchestrator | 2025-04-01 20:09:00.571413 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-04-01 20:09:00.571426 | orchestrator | Tuesday 01 April 2025 20:04:11 +0000 (0:00:07.553) 0:00:33.588 ********* 2025-04-01 20:09:00.571441 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-04-01 20:09:00.571454 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-04-01 20:09:00.571469 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-04-01 20:09:00.571483 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-04-01 20:09:00.571496 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-04-01 20:09:00.571510 | orchestrator | 2025-04-01 20:09:00.571524 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.571538 | orchestrator | Tuesday 01 April 2025 20:04:27 +0000 (0:00:15.985) 0:00:49.573 ********* 2025-04-01 20:09:00.571553 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:09:00.571567 | orchestrator | 2025-04-01 20:09:00.571581 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-04-01 20:09:00.571595 | orchestrator | Tuesday 01 April 2025 20:04:28 +0000 (0:00:01.068) 0:00:50.642 ********* 2025-04-01 20:09:00.571609 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.571623 | orchestrator | 2025-04-01 20:09:00.571637 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-04-01 20:09:00.571651 | orchestrator | Tuesday 01 April 2025 20:04:59 +0000 (0:00:31.329) 0:01:21.972 ********* 2025-04-01 20:09:00.571665 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.571679 | orchestrator | 2025-04-01 20:09:00.571693 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-04-01 20:09:00.571842 | orchestrator | Tuesday 01 April 2025 20:05:04 +0000 (0:00:04.707) 0:01:26.679 ********* 2025-04-01 20:09:00.571865 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.571880 | orchestrator | 2025-04-01 20:09:00.571894 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-04-01 20:09:00.571909 | orchestrator | Tuesday 01 April 2025 20:05:07 +0000 (0:00:03.387) 0:01:30.067 ********* 2025-04-01 20:09:00.571923 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-04-01 20:09:00.571937 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-04-01 20:09:00.571951 | orchestrator | 2025-04-01 20:09:00.571965 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-04-01 20:09:00.571979 | orchestrator | Tuesday 01 April 2025 20:05:16 +0000 (0:00:09.145) 0:01:39.212 ********* 2025-04-01 20:09:00.571993 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-04-01 20:09:00.572007 | orchestrator | changed: [testbed-node-0] => 
(item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-04-01 20:09:00.572023 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-04-01 20:09:00.572046 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-04-01 20:09:00.572060 | orchestrator | 2025-04-01 20:09:00.572075 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-04-01 20:09:00.572089 | orchestrator | Tuesday 01 April 2025 20:05:32 +0000 (0:00:15.532) 0:01:54.744 ********* 2025-04-01 20:09:00.572103 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572117 | orchestrator | 2025-04-01 20:09:00.572131 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-04-01 20:09:00.572145 | orchestrator | Tuesday 01 April 2025 20:05:36 +0000 (0:00:03.888) 0:01:58.633 ********* 2025-04-01 20:09:00.572159 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572174 | orchestrator | 2025-04-01 20:09:00.572188 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-04-01 20:09:00.572202 | orchestrator | Tuesday 01 April 2025 20:05:41 +0000 (0:00:05.085) 0:02:03.719 ********* 2025-04-01 20:09:00.572216 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.572231 | orchestrator | 2025-04-01 20:09:00.572244 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-04-01 20:09:00.572258 | orchestrator | Tuesday 01 April 2025 20:05:41 +0000 (0:00:00.350) 0:02:04.069 ********* 2025-04-01 20:09:00.572272 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572287 | orchestrator | 2025-04-01 20:09:00.572301 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.572315 | orchestrator | Tuesday 01 April 2025 20:05:46 +0000 (0:00:04.718) 0:02:08.787 ********* 2025-04-01 20:09:00.572329 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-04-01 20:09:00.572344 | orchestrator | 2025-04-01 20:09:00.572358 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-04-01 20:09:00.572373 | orchestrator | Tuesday 01 April 2025 20:05:48 +0000 (0:00:02.318) 0:02:11.106 ********* 2025-04-01 20:09:00.572392 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572407 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572421 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572435 | orchestrator | 2025-04-01 20:09:00.572450 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-04-01 20:09:00.572464 | orchestrator | Tuesday 01 April 2025 20:05:54 +0000 (0:00:05.694) 0:02:16.801 ********* 2025-04-01 20:09:00.572478 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572492 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572519 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572533 | orchestrator | 2025-04-01 20:09:00.572547 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-04-01 20:09:00.572561 | orchestrator 
| Tuesday 01 April 2025 20:05:59 +0000 (0:00:04.714) 0:02:21.516 ********* 2025-04-01 20:09:00.572575 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572589 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572603 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572618 | orchestrator | 2025-04-01 20:09:00.572632 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-04-01 20:09:00.572646 | orchestrator | Tuesday 01 April 2025 20:06:00 +0000 (0:00:00.990) 0:02:22.506 ********* 2025-04-01 20:09:00.572660 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:09:00.572680 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:09:00.572694 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.572708 | orchestrator | 2025-04-01 20:09:00.572722 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-04-01 20:09:00.572737 | orchestrator | Tuesday 01 April 2025 20:06:02 +0000 (0:00:02.033) 0:02:24.540 ********* 2025-04-01 20:09:00.572751 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572765 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572779 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572793 | orchestrator | 2025-04-01 20:09:00.572807 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-04-01 20:09:00.572839 | orchestrator | Tuesday 01 April 2025 20:06:03 +0000 (0:00:01.526) 0:02:26.067 ********* 2025-04-01 20:09:00.572854 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572869 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572883 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572897 | orchestrator | 2025-04-01 20:09:00.572910 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-04-01 20:09:00.572925 | orchestrator | Tuesday 01 April 2025 20:06:05 +0000 (0:00:01.890) 0:02:27.957 ********* 2025-04-01 20:09:00.572939 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.572953 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.572967 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.572982 | orchestrator | 2025-04-01 20:09:00.573038 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-04-01 20:09:00.573054 | orchestrator | Tuesday 01 April 2025 20:06:07 +0000 (0:00:02.072) 0:02:30.030 ********* 2025-04-01 20:09:00.573069 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.573083 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.573097 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.573111 | orchestrator | 2025-04-01 20:09:00.573125 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-04-01 20:09:00.573139 | orchestrator | Tuesday 01 April 2025 20:06:09 +0000 (0:00:01.591) 0:02:31.621 ********* 2025-04-01 20:09:00.573154 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573167 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:09:00.573182 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:09:00.573195 | orchestrator | 2025-04-01 20:09:00.573210 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-04-01 20:09:00.573224 | orchestrator | Tuesday 01 April 2025 20:06:10 +0000 (0:00:00.674) 0:02:32.295 ********* 2025-04-01 20:09:00.573238 | 
orchestrator | ok: [testbed-node-2] 2025-04-01 20:09:00.573252 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573266 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:09:00.573280 | orchestrator | 2025-04-01 20:09:00.573294 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.573308 | orchestrator | Tuesday 01 April 2025 20:06:13 +0000 (0:00:03.703) 0:02:35.999 ********* 2025-04-01 20:09:00.573322 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:09:00.573337 | orchestrator | 2025-04-01 20:09:00.573351 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-04-01 20:09:00.573371 | orchestrator | Tuesday 01 April 2025 20:06:14 +0000 (0:00:00.925) 0:02:36.925 ********* 2025-04-01 20:09:00.573385 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573400 | orchestrator | 2025-04-01 20:09:00.573414 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-04-01 20:09:00.573428 | orchestrator | Tuesday 01 April 2025 20:06:18 +0000 (0:00:03.975) 0:02:40.900 ********* 2025-04-01 20:09:00.573442 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573456 | orchestrator | 2025-04-01 20:09:00.573470 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-04-01 20:09:00.573483 | orchestrator | Tuesday 01 April 2025 20:06:21 +0000 (0:00:02.872) 0:02:43.773 ********* 2025-04-01 20:09:00.573497 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-04-01 20:09:00.573512 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-04-01 20:09:00.573526 | orchestrator | 2025-04-01 20:09:00.573540 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-04-01 20:09:00.573554 | orchestrator | Tuesday 01 April 2025 20:06:28 +0000 (0:00:06.566) 0:02:50.339 ********* 2025-04-01 20:09:00.573568 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573582 | orchestrator | 2025-04-01 20:09:00.573597 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-04-01 20:09:00.573611 | orchestrator | Tuesday 01 April 2025 20:06:32 +0000 (0:00:03.959) 0:02:54.299 ********* 2025-04-01 20:09:00.573624 | orchestrator | ok: [testbed-node-0] 2025-04-01 20:09:00.573639 | orchestrator | ok: [testbed-node-1] 2025-04-01 20:09:00.573652 | orchestrator | ok: [testbed-node-2] 2025-04-01 20:09:00.573667 | orchestrator | 2025-04-01 20:09:00.573686 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-04-01 20:09:00.573700 | orchestrator | Tuesday 01 April 2025 20:06:32 +0000 (0:00:00.561) 0:02:54.860 ********* 2025-04-01 20:09:00.573716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.573772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.573789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.573812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.573876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.573892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.573907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.573923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.573976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.574160 | orchestrator | 2025-04-01 20:09:00.574211 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-04-01 20:09:00.574228 | orchestrator | Tuesday 01 April 2025 20:06:35 +0000 (0:00:03.368) 0:02:58.229 ********* 2025-04-01 20:09:00.574242 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.574257 | orchestrator | 2025-04-01 20:09:00.574271 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-04-01 20:09:00.574285 | orchestrator | Tuesday 01 April 2025 20:06:36 +0000 (0:00:00.143) 0:02:58.373 ********* 2025-04-01 20:09:00.574299 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.574314 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:09:00.574328 | orchestrator | skipping: [testbed-node-2] 2025-04-01 
20:09:00.574342 | orchestrator | 2025-04-01 20:09:00.574356 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-04-01 20:09:00.574370 | orchestrator | Tuesday 01 April 2025 20:06:36 +0000 (0:00:00.511) 0:02:58.884 ********* 2025-04-01 20:09:00.574385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.574399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.574414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.574466 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.574513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.574530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.574545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.574590 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:09:00.574634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.574658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.574673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.574703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.574717 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:09:00.574732 | orchestrator | 2025-04-01 20:09:00.574749 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.574764 | orchestrator | Tuesday 01 April 2025 20:06:37 +0000 (0:00:01.270) 0:03:00.155 ********* 2025-04-01 20:09:00.574778 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-01 20:09:00.574794 | orchestrator | 2025-04-01 20:09:00.574808 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-04-01 20:09:00.574842 | orchestrator | Tuesday 01 April 2025 20:06:38 +0000 (0:00:00.690) 0:03:00.845 ********* 2025-04-01 20:09:00.574865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.574918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.574935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.574950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.574965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.574980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.575002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.575158 | orchestrator | 2025-04-01 20:09:00.575173 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-04-01 20:09:00.575187 | orchestrator | Tuesday 01 April 2025 20:06:43 +0000 (0:00:04.914) 0:03:05.759 ********* 2025-04-01 20:09:00.575202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575289 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.575305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575385 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:09:00.575407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 
'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575487 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:09:00.575502 | orchestrator | 2025-04-01 20:09:00.575516 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-04-01 20:09:00.575531 | orchestrator | Tuesday 01 April 2025 20:06:44 +0000 (0:00:00.901) 0:03:06.661 ********* 2025-04-01 20:09:00.575545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575636 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.575651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575732 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:09:00.575747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-01 20:09:00.575768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-01 20:09:00.575783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-01 20:09:00.575873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-01 20:09:00.575893 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:09:00.575907 | orchestrator | 2025-04-01 20:09:00.575922 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-04-01 20:09:00.575936 | orchestrator | Tuesday 01 April 2025 20:06:45 +0000 (0:00:01.336) 0:03:07.997 ********* 2025-04-01 20:09:00.575952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.575981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.575996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.576017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576084 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576215 | orchestrator | 2025-04-01 20:09:00.576229 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-04-01 20:09:00.576244 | orchestrator | Tuesday 01 April 2025 20:06:50 +0000 (0:00:04.956) 0:03:12.954 ********* 2025-04-01 20:09:00.576259 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-04-01 20:09:00.576273 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-04-01 20:09:00.576288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-04-01 20:09:00.576302 | orchestrator | 2025-04-01 20:09:00.576316 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-04-01 20:09:00.576331 | orchestrator | Tuesday 01 April 2025 20:06:53 +0000 (0:00:02.989) 0:03:15.943 ********* 2025-04-01 20:09:00.576346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 
20:09:00.576368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.576389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.576403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.576442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.576583 | orchestrator | 2025-04-01 20:09:00.576596 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-04-01 20:09:00.576608 | orchestrator | Tuesday 01 April 2025 20:07:15 +0000 (0:00:22.131) 0:03:38.075 ********* 2025-04-01 20:09:00.576621 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.576634 | orchestrator | changed: [testbed-node-1] 2025-04-01 20:09:00.576647 | orchestrator | changed: [testbed-node-2] 2025-04-01 20:09:00.576659 | orchestrator | 2025-04-01 20:09:00.576672 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-04-01 20:09:00.576684 | orchestrator | Tuesday 01 April 2025 20:07:18 +0000 (0:00:02.474) 0:03:40.549 ********* 2025-04-01 20:09:00.576697 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576710 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576722 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576735 | orchestrator | changed: [testbed-node-1] 
=> (item=client_ca.cert.pem) 2025-04-01 20:09:00.576747 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.576760 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.576772 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.576785 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.576797 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.576810 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-04-01 20:09:00.576838 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-04-01 20:09:00.576852 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-04-01 20:09:00.576864 | orchestrator | 2025-04-01 20:09:00.576877 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-04-01 20:09:00.576889 | orchestrator | Tuesday 01 April 2025 20:07:27 +0000 (0:00:09.025) 0:03:49.575 ********* 2025-04-01 20:09:00.576902 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576914 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576934 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.576946 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.576959 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.576971 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.576984 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.576996 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.577009 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.577021 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577034 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577046 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577059 | orchestrator | 2025-04-01 20:09:00.577072 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-04-01 20:09:00.577090 | orchestrator | Tuesday 01 April 2025 20:07:33 +0000 (0:00:06.591) 0:03:56.166 ********* 2025-04-01 20:09:00.577103 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.577115 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.577128 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-04-01 20:09:00.577140 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.577153 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.577165 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-04-01 20:09:00.577177 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.577190 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.577202 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-04-01 20:09:00.577215 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577227 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577239 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-04-01 20:09:00.577252 | orchestrator | 2025-04-01 20:09:00.577265 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-04-01 20:09:00.577282 | orchestrator | Tuesday 01 April 2025 20:07:42 +0000 (0:00:08.262) 0:04:04.429 ********* 2025-04-01 20:09:00.577295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.577309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.577322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-01 20:09:00.577341 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.577355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.577378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-04-01 20:09:00.577392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2025-04-01 20:09:00.577432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-04-01 20:09:00.577522 | orchestrator | 2025-04-01 20:09:00.577534 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-01 20:09:00.577547 | orchestrator | Tuesday 01 April 2025 20:07:46 +0000 (0:00:04.245) 0:04:08.675 ********* 2025-04-01 20:09:00.577560 | orchestrator | skipping: [testbed-node-0] 2025-04-01 20:09:00.577573 | orchestrator | skipping: [testbed-node-1] 2025-04-01 20:09:00.577591 | orchestrator | skipping: [testbed-node-2] 2025-04-01 20:09:00.577604 | orchestrator | 2025-04-01 20:09:00.577616 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-04-01 20:09:00.577628 | orchestrator | Tuesday 01 April 2025 20:07:46 +0000 (0:00:00.333) 0:04:09.009 ********* 2025-04-01 20:09:00.577641 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.577654 | orchestrator | 2025-04-01 20:09:00.577666 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-04-01 20:09:00.577679 | orchestrator | Tuesday 01 April 2025 20:07:48 +0000 (0:00:01.808) 0:04:10.818 ********* 2025-04-01 20:09:00.577691 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.577704 | orchestrator | 2025-04-01 20:09:00.577716 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-04-01 20:09:00.577729 | orchestrator | Tuesday 01 April 2025 20:07:50 +0000 (0:00:02.390) 0:04:13.209 ********* 2025-04-01 20:09:00.577741 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.577754 | orchestrator | 2025-04-01 20:09:00.577766 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-04-01 20:09:00.577779 | orchestrator | Tuesday 01 April 2025 20:07:53 +0000 (0:00:02.801) 0:04:16.011 ********* 2025-04-01 20:09:00.577791 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.577804 | orchestrator | 2025-04-01 20:09:00.577991 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-04-01 20:09:00.578008 | orchestrator | Tuesday 01 April 2025 20:07:55 +0000 (0:00:02.102) 0:04:18.113 ********* 2025-04-01 20:09:00.578049 | orchestrator | changed: [testbed-node-0] 2025-04-01 20:09:00.578065 | orchestrator | 2025-04-01 20:09:00.578077 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-04-01 20:09:00.578090 | orchestrator | Tuesday 01 April 2025 20:08:12 +0000 (0:00:16.727) 0:04:34.840 ********* 2025-04-01 20:09:00.578103 | orchestrator | 2025-04-01 20:09:00.578115 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-04-01 20:09:00.578128 | orchestrator | Tuesday 01 April 2025 20:08:12 +0000 (0:00:00.231) 0:04:35.072 ********* 2025-04-01 20:09:00.578140 | orchestrator | 2025-04-01 20:09:00.578153 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-04-01 20:09:00.578165 | orchestrator | Tuesday 01 April 2025 20:08:12 +0000 (0:00:00.064) 0:04:35.136 ********* 2025-04-01 20:09:00.578178 | orchestrator | 2025-04-01 20:09:00.578190 
| orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********
2025-04-01 20:09:00.578203 | orchestrator | Tuesday 01 April 2025 20:08:12 +0000 (0:00:00.062) 0:04:35.199 *********
2025-04-01 20:09:00.578215 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:09:00.578229 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:09:00.578262 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:09:00.578276 | orchestrator |
2025-04-01 20:09:00.578288 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-04-01 20:09:00.578301 | orchestrator | Tuesday 01 April 2025 20:08:25 +0000 (0:00:12.156) 0:04:47.356 *********
2025-04-01 20:09:00.578314 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:09:00.578327 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:09:00.578339 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:09:00.578352 | orchestrator |
2025-04-01 20:09:00.578371 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-04-01 20:09:03.622142 | orchestrator | Tuesday 01 April 2025 20:08:33 +0000 (0:00:08.533) 0:04:55.889 *********
2025-04-01 20:09:03.622259 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:09:03.622279 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:09:03.622293 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:09:03.622305 | orchestrator |
2025-04-01 20:09:03.622319 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-04-01 20:09:03.622331 | orchestrator | Tuesday 01 April 2025 20:08:43 +0000 (0:00:10.076) 0:05:05.965 *********
2025-04-01 20:09:03.622345 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:09:03.622357 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:09:03.622396 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:09:03.622409 | orchestrator |
2025-04-01 20:09:03.622423 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-04-01 20:09:03.622436 | orchestrator | Tuesday 01 April 2025 20:08:53 +0000 (0:00:10.099) 0:05:16.065 *********
2025-04-01 20:09:03.622448 | orchestrator | changed: [testbed-node-0]
2025-04-01 20:09:03.622460 | orchestrator | changed: [testbed-node-1]
2025-04-01 20:09:03.622473 | orchestrator | changed: [testbed-node-2]
2025-04-01 20:09:03.622486 | orchestrator |
2025-04-01 20:09:03.622498 | orchestrator | PLAY RECAP *********************************************************************
2025-04-01 20:09:03.622511 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-01 20:09:03.622526 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-01 20:09:03.622538 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-04-01 20:09:03.622550 | orchestrator |
2025-04-01 20:09:03.622563 | orchestrator |
2025-04-01 20:09:03.622576 | orchestrator | TASKS RECAP ********************************************************************
2025-04-01 20:09:03.622588 | orchestrator | Tuesday 01 April 2025 20:08:59 +0000 (0:00:05.392) 0:05:21.457 *********
2025-04-01 20:09:03.622601 | orchestrator | ===============================================================================
2025-04-01 20:09:03.622628 | orchestrator | octavia : Create amphora flavor ---------------------------------------- 31.33s
2025-04-01 20:09:03.622642 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 22.13s
2025-04-01 20:09:03.622656 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 16.73s
2025-04-01 20:09:03.622670 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.99s
2025-04-01 20:09:03.622684 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.53s
2025-04-01 20:09:03.622698 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.16s
2025-04-01 20:09:03.622712 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.10s
2025-04-01 20:09:03.622726 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.08s
2025-04-01 20:09:03.622740 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.15s
2025-04-01 20:09:03.622754 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 9.03s
2025-04-01 20:09:03.622768 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.53s
2025-04-01 20:09:03.622782 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 8.26s
2025-04-01 20:09:03.622796 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.71s
2025-04-01 20:09:03.622811 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.55s
2025-04-01 20:09:03.622851 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.59s
2025-04-01 20:09:03.622866 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.57s
2025-04-01 20:09:03.622880 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.82s
2025-04-01 20:09:03.622894 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.69s
2025-04-01 20:09:03.622907 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.39s
2025-04-01 20:09:03.622921 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.09s
2025-04-01 20:09:03.622935 | orchestrator | 2025-04-01 20:09:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:03.622969 | orchestrator | 2025-04-01 20:09:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:09:06.677381 | orchestrator | 2025-04-01 20:09:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:06.677520 | orchestrator | 2025-04-01 20:09:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:09:09.727328 | orchestrator | 2025-04-01 20:09:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:09.727459 | orchestrator | 2025-04-01 20:09:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:09:12.779814 | orchestrator | 2025-04-01 20:09:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:12.780011 | orchestrator | 2025-04-01 20:09:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:09:15.834837 | orchestrator | 2025-04-01 20:09:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:09:15.835005 | orchestrator | 2025-04-01 20:09:15 | INFO  | Task
2025-04-01 20:09:03.622935 | orchestrator | 2025-04-01 20:09:00 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:09:03.622969 | orchestrator | 2025-04-01 20:09:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:13:38.064568 | orchestrator | 2025-04-01 20:13:35 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:13:38.064712 | orchestrator | 2025-04-01 20:13:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:14:02.509642 | orchestrator | 2025-04-01 20:14:02 | INFO  | Task 9e885d0e-4ce1-46d7-bedf-79798ae427ec is in state STARTED
2025-04-01 20:14:02.509753 | orchestrator | 2025-04-01 20:14:02 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:14:14.745253 | orchestrator | 2025-04-01 20:14:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:14:14.746975 | orchestrator | 2025-04-01 20:14:14 | INFO  | Task 9e885d0e-4ce1-46d7-bedf-79798ae427ec is in state SUCCESS
2025-04-01 20:14:17.796549 | orchestrator | 2025-04-01 20:14:14 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:14:17.796672 | orchestrator | 2025-04-01 20:14:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
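The interleaved 'Task ... is in state STARTED' and 'Wait 1 second(s) until the next check' lines that fill the rest of this section appear to come from the deployment tooling waiting for OSISM manager tasks: each queued task is looked up by UUID, its state is reported, and it drops out of the check once it reaches a terminal state, as task 9e885d0e-4ce1-46d7-bedf-79798ae427ec does above while aa2524f4-a625-4b6b-adac-0dc9967e8e8d keeps running. Conceptually this is a plain poll-and-sleep loop; the sketch below illustrates the pattern, with get_task_state standing in as an assumed callback rather than the real OSISM API.

import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)
LOG = logging.getLogger("wait-for-tasks")

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll until every task id has reached a terminal state.

    get_task_state is a caller-supplied function mapping a task id to its
    current state string ("STARTED", "SUCCESS", ...); it stands in for
    whatever lookup the real tooling performs.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            LOG.info("Task %s is in state %s", task_id, state)
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            LOG.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)

if __name__ == "__main__":
    # Dummy lookup so the sketch runs standalone: every task reports
    # STARTED twice and then SUCCESS.
    seen = {}
    def fake_state(task_id):
        seen[task_id] = seen.get(task_id, 0) + 1
        return "SUCCESS" if seen[task_id] > 2 else "STARTED"

    wait_for_tasks(["aa2524f4-a625-4b6b-adac-0dc9967e8e8d",
                    "9e885d0e-4ce1-46d7-bedf-79798ae427ec"], fake_state)

The one-second sleep plus the time each state lookup takes is consistent with the roughly three-second spacing of the timestamps in this log.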
2025-04-01 20:14:57.407401 | orchestrator | 2025-04-01 20:14:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:22:56.359494 | orchestrator | 2025-04-01 20:22:53 | INFO  | Wait 1 second(s) until the next check
2025-04-01 20:22:56.359584 | orchestrator | 2025-04-01 20:22:56 | INFO  | Task
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:22:59.407661 | orchestrator | 2025-04-01 20:22:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:22:59.407850 | orchestrator | 2025-04-01 20:22:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:02.456215 | orchestrator | 2025-04-01 20:22:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:02.456356 | orchestrator | 2025-04-01 20:23:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:05.502339 | orchestrator | 2025-04-01 20:23:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:05.502466 | orchestrator | 2025-04-01 20:23:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:08.550964 | orchestrator | 2025-04-01 20:23:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:08.551127 | orchestrator | 2025-04-01 20:23:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:11.603693 | orchestrator | 2025-04-01 20:23:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:11.603865 | orchestrator | 2025-04-01 20:23:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:14.654223 | orchestrator | 2025-04-01 20:23:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:14.654352 | orchestrator | 2025-04-01 20:23:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:17.704544 | orchestrator | 2025-04-01 20:23:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:17.704694 | orchestrator | 2025-04-01 20:23:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:20.757428 | orchestrator | 2025-04-01 20:23:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:20.757567 | orchestrator | 2025-04-01 20:23:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:23.814128 | orchestrator | 2025-04-01 20:23:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:23.814297 | orchestrator | 2025-04-01 20:23:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:26.868564 | orchestrator | 2025-04-01 20:23:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:26.868699 | orchestrator | 2025-04-01 20:23:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:29.913131 | orchestrator | 2025-04-01 20:23:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:29.913267 | orchestrator | 2025-04-01 20:23:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:32.961809 | orchestrator | 2025-04-01 20:23:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:32.961943 | orchestrator | 2025-04-01 20:23:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:36.013358 | orchestrator | 2025-04-01 20:23:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:36.013492 | orchestrator | 2025-04-01 20:23:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:23:39.059334 | orchestrator | 2025-04-01 20:23:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:23:39.059460 | orchestrator | 2025-04-01 20:23:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 
2025-04-01 20:24:03.476911 | orchestrator | 2025-04-01 20:24:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:24:03.478234 | orchestrator | 2025-04-01 20:24:03 | INFO  | Task 47718772-cb16-43ec-8b59-0eb955b2d18b is in state STARTED
[... both tasks are polled together and remain in state STARTED through 20:24:12 ...]
2025-04-01 20:24:15.713037 | orchestrator | 2025-04-01 20:24:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:24:15.713198 | orchestrator | 2025-04-01 20:24:15 | INFO  | Task 47718772-cb16-43ec-8b59-0eb955b2d18b is in state SUCCESS
[... Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d remains in state STARTED; the check/wait cycle continues every ~3 seconds through 20:24:58 ...]
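The entries above are a plain client-side wait loop: the deployment submits background tasks to the orchestrator and then polls each task's state every few seconds, logging "is in state STARTED" and "Wait 1 second(s) until the next check" until the task reaches a terminal state such as SUCCESS. A minimal sketch of such a loop follows; get_task_state and the simulated state sequence are hypothetical stand-ins used only to illustrate the pattern, not the OSISM implementation.

import itertools
import time

# Simulated task-state source; a real client would query the orchestrator's
# task API here instead (hypothetical helper, not the OSISM API).
_simulated_states = itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS"))

def get_task_state(task_id):
    """Stand-in for an API call returning the task's current state."""
    return next(_simulated_states)

def wait_for_task(task_id, interval=1.0):
    """Poll a task until it reaches a terminal state, logging each check."""
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE"):
            return state
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)

wait_for_task("aa2524f4-a625-4b6b-adac-0dc9967e8e8d")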
[... the check/wait cycle repeats every ~3 seconds; Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d stays in state STARTED from 20:25:01 through 20:33:40 ...]
orchestrator | 2025-04-01 20:33:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:33:46.232478 | orchestrator | 2025-04-01 20:33:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:33:46.232696 | orchestrator | 2025-04-01 20:33:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:33:49.287013 | orchestrator | 2025-04-01 20:33:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:33:49.287184 | orchestrator | 2025-04-01 20:33:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:33:52.342110 | orchestrator | 2025-04-01 20:33:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:33:52.342296 | orchestrator | 2025-04-01 20:33:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:33:55.394218 | orchestrator | 2025-04-01 20:33:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:33:55.394365 | orchestrator | 2025-04-01 20:33:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:33:58.450097 | orchestrator | 2025-04-01 20:33:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:33:58.450239 | orchestrator | 2025-04-01 20:33:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:01.493966 | orchestrator | 2025-04-01 20:33:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:01.494142 | orchestrator | 2025-04-01 20:34:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:04.541898 | orchestrator | 2025-04-01 20:34:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:04.542105 | orchestrator | 2025-04-01 20:34:04 | INFO  | Task c4990fd7-f684-4a50-84bd-7c2f5bf0b8d0 is in state STARTED 2025-04-01 20:34:04.543134 | orchestrator | 2025-04-01 20:34:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:07.607037 | orchestrator | 2025-04-01 20:34:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:07.607171 | orchestrator | 2025-04-01 20:34:07 | INFO  | Task c4990fd7-f684-4a50-84bd-7c2f5bf0b8d0 is in state STARTED 2025-04-01 20:34:07.609706 | orchestrator | 2025-04-01 20:34:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:10.662283 | orchestrator | 2025-04-01 20:34:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:10.662456 | orchestrator | 2025-04-01 20:34:10 | INFO  | Task c4990fd7-f684-4a50-84bd-7c2f5bf0b8d0 is in state STARTED 2025-04-01 20:34:10.662815 | orchestrator | 2025-04-01 20:34:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:13.720065 | orchestrator | 2025-04-01 20:34:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:13.720236 | orchestrator | 2025-04-01 20:34:13 | INFO  | Task c4990fd7-f684-4a50-84bd-7c2f5bf0b8d0 is in state SUCCESS 2025-04-01 20:34:13.722820 | orchestrator | 2025-04-01 20:34:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:16.768413 | orchestrator | 2025-04-01 20:34:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:16.768551 | orchestrator | 2025-04-01 20:34:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:19.820742 | orchestrator | 2025-04-01 20:34:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:19.820887 | orchestrator | 2025-04-01 20:34:19 
| INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:22.871572 | orchestrator | 2025-04-01 20:34:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:22.871751 | orchestrator | 2025-04-01 20:34:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:22.872652 | orchestrator | 2025-04-01 20:34:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:25.931961 | orchestrator | 2025-04-01 20:34:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:28.981422 | orchestrator | 2025-04-01 20:34:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:28.981567 | orchestrator | 2025-04-01 20:34:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:32.017268 | orchestrator | 2025-04-01 20:34:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:32.017387 | orchestrator | 2025-04-01 20:34:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:32.017526 | orchestrator | 2025-04-01 20:34:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:35.060727 | orchestrator | 2025-04-01 20:34:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:38.093496 | orchestrator | 2025-04-01 20:34:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:38.093673 | orchestrator | 2025-04-01 20:34:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:41.150130 | orchestrator | 2025-04-01 20:34:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:41.150315 | orchestrator | 2025-04-01 20:34:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:44.191517 | orchestrator | 2025-04-01 20:34:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:44.191683 | orchestrator | 2025-04-01 20:34:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:47.244525 | orchestrator | 2025-04-01 20:34:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:47.244678 | orchestrator | 2025-04-01 20:34:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:50.291681 | orchestrator | 2025-04-01 20:34:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:50.291859 | orchestrator | 2025-04-01 20:34:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:53.340884 | orchestrator | 2025-04-01 20:34:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:53.341065 | orchestrator | 2025-04-01 20:34:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:56.385249 | orchestrator | 2025-04-01 20:34:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:56.385410 | orchestrator | 2025-04-01 20:34:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:34:59.441007 | orchestrator | 2025-04-01 20:34:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:34:59.441191 | orchestrator | 2025-04-01 20:34:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:02.479335 | orchestrator | 2025-04-01 20:34:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:02.479501 | orchestrator | 2025-04-01 20:35:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 
20:35:05.518932 | orchestrator | 2025-04-01 20:35:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:05.519067 | orchestrator | 2025-04-01 20:35:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:08.577914 | orchestrator | 2025-04-01 20:35:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:08.578148 | orchestrator | 2025-04-01 20:35:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:11.627542 | orchestrator | 2025-04-01 20:35:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:11.627757 | orchestrator | 2025-04-01 20:35:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:14.678929 | orchestrator | 2025-04-01 20:35:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:14.679114 | orchestrator | 2025-04-01 20:35:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:17.720516 | orchestrator | 2025-04-01 20:35:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:17.720743 | orchestrator | 2025-04-01 20:35:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:20.779426 | orchestrator | 2025-04-01 20:35:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:20.779639 | orchestrator | 2025-04-01 20:35:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:23.832060 | orchestrator | 2025-04-01 20:35:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:23.832237 | orchestrator | 2025-04-01 20:35:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:26.891362 | orchestrator | 2025-04-01 20:35:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:26.891491 | orchestrator | 2025-04-01 20:35:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:29.949484 | orchestrator | 2025-04-01 20:35:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:29.949673 | orchestrator | 2025-04-01 20:35:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:32.994210 | orchestrator | 2025-04-01 20:35:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:32.994314 | orchestrator | 2025-04-01 20:35:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:36.041237 | orchestrator | 2025-04-01 20:35:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:36.041384 | orchestrator | 2025-04-01 20:35:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:39.088002 | orchestrator | 2025-04-01 20:35:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:39.088152 | orchestrator | 2025-04-01 20:35:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:42.143058 | orchestrator | 2025-04-01 20:35:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:42.143205 | orchestrator | 2025-04-01 20:35:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:45.187745 | orchestrator | 2025-04-01 20:35:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:45.187878 | orchestrator | 2025-04-01 20:35:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:48.229310 | orchestrator | 2025-04-01 20:35:45 | INFO  | Wait 1 second(s) 
until the next check 2025-04-01 20:35:48.229451 | orchestrator | 2025-04-01 20:35:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:51.286850 | orchestrator | 2025-04-01 20:35:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:51.286990 | orchestrator | 2025-04-01 20:35:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:54.337746 | orchestrator | 2025-04-01 20:35:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:54.337875 | orchestrator | 2025-04-01 20:35:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:35:57.384909 | orchestrator | 2025-04-01 20:35:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:35:57.385031 | orchestrator | 2025-04-01 20:35:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:00.431718 | orchestrator | 2025-04-01 20:35:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:00.431851 | orchestrator | 2025-04-01 20:36:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:03.475636 | orchestrator | 2025-04-01 20:36:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:03.475790 | orchestrator | 2025-04-01 20:36:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:06.529001 | orchestrator | 2025-04-01 20:36:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:06.529137 | orchestrator | 2025-04-01 20:36:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:09.578444 | orchestrator | 2025-04-01 20:36:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:09.578569 | orchestrator | 2025-04-01 20:36:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:12.621168 | orchestrator | 2025-04-01 20:36:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:12.621296 | orchestrator | 2025-04-01 20:36:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:15.669767 | orchestrator | 2025-04-01 20:36:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:15.669904 | orchestrator | 2025-04-01 20:36:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:18.730085 | orchestrator | 2025-04-01 20:36:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:18.730218 | orchestrator | 2025-04-01 20:36:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:21.779245 | orchestrator | 2025-04-01 20:36:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:21.779397 | orchestrator | 2025-04-01 20:36:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:24.824968 | orchestrator | 2025-04-01 20:36:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:24.825103 | orchestrator | 2025-04-01 20:36:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:27.874997 | orchestrator | 2025-04-01 20:36:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:27.875133 | orchestrator | 2025-04-01 20:36:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:30.932180 | orchestrator | 2025-04-01 20:36:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:30.932315 | orchestrator | 2025-04-01 
20:36:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:33.976515 | orchestrator | 2025-04-01 20:36:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:33.976661 | orchestrator | 2025-04-01 20:36:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:37.024679 | orchestrator | 2025-04-01 20:36:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:37.024819 | orchestrator | 2025-04-01 20:36:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:40.078126 | orchestrator | 2025-04-01 20:36:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:40.078262 | orchestrator | 2025-04-01 20:36:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:43.126223 | orchestrator | 2025-04-01 20:36:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:43.126365 | orchestrator | 2025-04-01 20:36:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:46.172210 | orchestrator | 2025-04-01 20:36:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:46.172340 | orchestrator | 2025-04-01 20:36:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:49.219244 | orchestrator | 2025-04-01 20:36:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:49.219393 | orchestrator | 2025-04-01 20:36:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:52.276670 | orchestrator | 2025-04-01 20:36:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:52.276808 | orchestrator | 2025-04-01 20:36:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:55.334236 | orchestrator | 2025-04-01 20:36:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:55.334380 | orchestrator | 2025-04-01 20:36:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:36:58.389158 | orchestrator | 2025-04-01 20:36:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:36:58.389275 | orchestrator | 2025-04-01 20:36:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:01.444511 | orchestrator | 2025-04-01 20:36:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:01.444697 | orchestrator | 2025-04-01 20:37:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:04.489729 | orchestrator | 2025-04-01 20:37:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:04.489848 | orchestrator | 2025-04-01 20:37:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:07.542532 | orchestrator | 2025-04-01 20:37:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:07.542700 | orchestrator | 2025-04-01 20:37:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:10.607809 | orchestrator | 2025-04-01 20:37:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:10.607946 | orchestrator | 2025-04-01 20:37:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:13.658075 | orchestrator | 2025-04-01 20:37:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:13.658205 | orchestrator | 2025-04-01 20:37:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 
2025-04-01 20:37:16.709962 | orchestrator | 2025-04-01 20:37:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:16.710184 | orchestrator | 2025-04-01 20:37:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:19.766694 | orchestrator | 2025-04-01 20:37:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:19.766826 | orchestrator | 2025-04-01 20:37:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:22.814065 | orchestrator | 2025-04-01 20:37:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:22.814206 | orchestrator | 2025-04-01 20:37:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:25.870173 | orchestrator | 2025-04-01 20:37:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:25.870313 | orchestrator | 2025-04-01 20:37:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:28.919754 | orchestrator | 2025-04-01 20:37:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:28.919892 | orchestrator | 2025-04-01 20:37:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:31.966878 | orchestrator | 2025-04-01 20:37:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:31.967020 | orchestrator | 2025-04-01 20:37:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:35.010348 | orchestrator | 2025-04-01 20:37:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:35.010518 | orchestrator | 2025-04-01 20:37:35 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:38.061328 | orchestrator | 2025-04-01 20:37:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:38.061468 | orchestrator | 2025-04-01 20:37:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:41.108705 | orchestrator | 2025-04-01 20:37:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:41.108815 | orchestrator | 2025-04-01 20:37:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:44.150728 | orchestrator | 2025-04-01 20:37:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:44.150880 | orchestrator | 2025-04-01 20:37:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:47.199306 | orchestrator | 2025-04-01 20:37:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:47.199433 | orchestrator | 2025-04-01 20:37:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:50.246718 | orchestrator | 2025-04-01 20:37:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:50.246845 | orchestrator | 2025-04-01 20:37:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:53.300137 | orchestrator | 2025-04-01 20:37:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:53.300271 | orchestrator | 2025-04-01 20:37:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:56.358003 | orchestrator | 2025-04-01 20:37:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:37:56.358194 | orchestrator | 2025-04-01 20:37:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:37:59.406726 | orchestrator | 2025-04-01 20:37:56 | INFO  | Wait 1 
second(s) until the next check 2025-04-01 20:37:59.406895 | orchestrator | 2025-04-01 20:37:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:02.457477 | orchestrator | 2025-04-01 20:37:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:02.457691 | orchestrator | 2025-04-01 20:38:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:05.508700 | orchestrator | 2025-04-01 20:38:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:05.508849 | orchestrator | 2025-04-01 20:38:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:08.558596 | orchestrator | 2025-04-01 20:38:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:08.558743 | orchestrator | 2025-04-01 20:38:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:11.610172 | orchestrator | 2025-04-01 20:38:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:11.610302 | orchestrator | 2025-04-01 20:38:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:14.660702 | orchestrator | 2025-04-01 20:38:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:14.660860 | orchestrator | 2025-04-01 20:38:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:17.707358 | orchestrator | 2025-04-01 20:38:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:17.707490 | orchestrator | 2025-04-01 20:38:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:20.754908 | orchestrator | 2025-04-01 20:38:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:20.755044 | orchestrator | 2025-04-01 20:38:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:23.807632 | orchestrator | 2025-04-01 20:38:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:23.807771 | orchestrator | 2025-04-01 20:38:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:26.860078 | orchestrator | 2025-04-01 20:38:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:26.860210 | orchestrator | 2025-04-01 20:38:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:29.917017 | orchestrator | 2025-04-01 20:38:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:29.917119 | orchestrator | 2025-04-01 20:38:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:32.968500 | orchestrator | 2025-04-01 20:38:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:32.968648 | orchestrator | 2025-04-01 20:38:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:36.019118 | orchestrator | 2025-04-01 20:38:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:36.019256 | orchestrator | 2025-04-01 20:38:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:39.064075 | orchestrator | 2025-04-01 20:38:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:39.064203 | orchestrator | 2025-04-01 20:38:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:42.114478 | orchestrator | 2025-04-01 20:38:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:42.114684 | orchestrator | 
2025-04-01 20:38:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:45.160434 | orchestrator | 2025-04-01 20:38:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:45.160564 | orchestrator | 2025-04-01 20:38:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:48.204036 | orchestrator | 2025-04-01 20:38:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:48.204158 | orchestrator | 2025-04-01 20:38:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:51.247026 | orchestrator | 2025-04-01 20:38:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:51.247165 | orchestrator | 2025-04-01 20:38:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:54.292484 | orchestrator | 2025-04-01 20:38:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:54.292671 | orchestrator | 2025-04-01 20:38:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:38:57.349242 | orchestrator | 2025-04-01 20:38:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:38:57.349368 | orchestrator | 2025-04-01 20:38:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:00.404720 | orchestrator | 2025-04-01 20:38:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:00.404824 | orchestrator | 2025-04-01 20:39:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:03.449220 | orchestrator | 2025-04-01 20:39:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:03.449336 | orchestrator | 2025-04-01 20:39:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:06.491054 | orchestrator | 2025-04-01 20:39:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:06.491216 | orchestrator | 2025-04-01 20:39:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:09.542909 | orchestrator | 2025-04-01 20:39:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:09.543023 | orchestrator | 2025-04-01 20:39:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:12.594974 | orchestrator | 2025-04-01 20:39:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:12.595089 | orchestrator | 2025-04-01 20:39:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:15.645169 | orchestrator | 2025-04-01 20:39:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:15.645389 | orchestrator | 2025-04-01 20:39:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:18.701117 | orchestrator | 2025-04-01 20:39:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:18.701237 | orchestrator | 2025-04-01 20:39:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:21.749554 | orchestrator | 2025-04-01 20:39:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:21.749733 | orchestrator | 2025-04-01 20:39:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:24.798011 | orchestrator | 2025-04-01 20:39:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:24.798139 | orchestrator | 2025-04-01 20:39:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in 
state STARTED 2025-04-01 20:39:27.844842 | orchestrator | 2025-04-01 20:39:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:27.844976 | orchestrator | 2025-04-01 20:39:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:30.896247 | orchestrator | 2025-04-01 20:39:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:30.896456 | orchestrator | 2025-04-01 20:39:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:33.944922 | orchestrator | 2025-04-01 20:39:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:33.945047 | orchestrator | 2025-04-01 20:39:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:36.985928 | orchestrator | 2025-04-01 20:39:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:36.986102 | orchestrator | 2025-04-01 20:39:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:40.042082 | orchestrator | 2025-04-01 20:39:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:40.042185 | orchestrator | 2025-04-01 20:39:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:43.091001 | orchestrator | 2025-04-01 20:39:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:43.091132 | orchestrator | 2025-04-01 20:39:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:46.137767 | orchestrator | 2025-04-01 20:39:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:46.137904 | orchestrator | 2025-04-01 20:39:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:49.179470 | orchestrator | 2025-04-01 20:39:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:49.179638 | orchestrator | 2025-04-01 20:39:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:52.229864 | orchestrator | 2025-04-01 20:39:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:52.230005 | orchestrator | 2025-04-01 20:39:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:55.277496 | orchestrator | 2025-04-01 20:39:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:55.277638 | orchestrator | 2025-04-01 20:39:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:39:58.337073 | orchestrator | 2025-04-01 20:39:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:39:58.337196 | orchestrator | 2025-04-01 20:39:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:01.390392 | orchestrator | 2025-04-01 20:39:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:01.390517 | orchestrator | 2025-04-01 20:40:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:04.437015 | orchestrator | 2025-04-01 20:40:01 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:04.437121 | orchestrator | 2025-04-01 20:40:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:07.477686 | orchestrator | 2025-04-01 20:40:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:07.477786 | orchestrator | 2025-04-01 20:40:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:10.519990 | orchestrator | 2025-04-01 20:40:07 | 
INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:10.520077 | orchestrator | 2025-04-01 20:40:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:13.573905 | orchestrator | 2025-04-01 20:40:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:13.573999 | orchestrator | 2025-04-01 20:40:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:16.619848 | orchestrator | 2025-04-01 20:40:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:16.619938 | orchestrator | 2025-04-01 20:40:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:19.671490 | orchestrator | 2025-04-01 20:40:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:19.671681 | orchestrator | 2025-04-01 20:40:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:22.718681 | orchestrator | 2025-04-01 20:40:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:22.718811 | orchestrator | 2025-04-01 20:40:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:25.768175 | orchestrator | 2025-04-01 20:40:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:25.768318 | orchestrator | 2025-04-01 20:40:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:28.810236 | orchestrator | 2025-04-01 20:40:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:28.810406 | orchestrator | 2025-04-01 20:40:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:31.853775 | orchestrator | 2025-04-01 20:40:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:31.853916 | orchestrator | 2025-04-01 20:40:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:34.900727 | orchestrator | 2025-04-01 20:40:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:34.900859 | orchestrator | 2025-04-01 20:40:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:37.946314 | orchestrator | 2025-04-01 20:40:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:37.946483 | orchestrator | 2025-04-01 20:40:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:40.981313 | orchestrator | 2025-04-01 20:40:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:40.981456 | orchestrator | 2025-04-01 20:40:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:44.025274 | orchestrator | 2025-04-01 20:40:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:44.025407 | orchestrator | 2025-04-01 20:40:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:47.072661 | orchestrator | 2025-04-01 20:40:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:47.072795 | orchestrator | 2025-04-01 20:40:47 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:50.124074 | orchestrator | 2025-04-01 20:40:47 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:50.124257 | orchestrator | 2025-04-01 20:40:50 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:50.124407 | orchestrator | 2025-04-01 20:40:50 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:53.171813 | 
orchestrator | 2025-04-01 20:40:53 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:56.222646 | orchestrator | 2025-04-01 20:40:53 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:56.222781 | orchestrator | 2025-04-01 20:40:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:40:59.270351 | orchestrator | 2025-04-01 20:40:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:40:59.270467 | orchestrator | 2025-04-01 20:40:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:02.316612 | orchestrator | 2025-04-01 20:40:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:02.316732 | orchestrator | 2025-04-01 20:41:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:05.356561 | orchestrator | 2025-04-01 20:41:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:05.356721 | orchestrator | 2025-04-01 20:41:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:08.401527 | orchestrator | 2025-04-01 20:41:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:08.401690 | orchestrator | 2025-04-01 20:41:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:11.446676 | orchestrator | 2025-04-01 20:41:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:11.446815 | orchestrator | 2025-04-01 20:41:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:14.492921 | orchestrator | 2025-04-01 20:41:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:14.493043 | orchestrator | 2025-04-01 20:41:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:17.532939 | orchestrator | 2025-04-01 20:41:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:17.533086 | orchestrator | 2025-04-01 20:41:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:20.575424 | orchestrator | 2025-04-01 20:41:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:20.575560 | orchestrator | 2025-04-01 20:41:20 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:23.619051 | orchestrator | 2025-04-01 20:41:20 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:23.619207 | orchestrator | 2025-04-01 20:41:23 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:26.670207 | orchestrator | 2025-04-01 20:41:23 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:26.670331 | orchestrator | 2025-04-01 20:41:26 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:29.719793 | orchestrator | 2025-04-01 20:41:26 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:29.719970 | orchestrator | 2025-04-01 20:41:29 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:32.777556 | orchestrator | 2025-04-01 20:41:29 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:32.777763 | orchestrator | 2025-04-01 20:41:32 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:35.832420 | orchestrator | 2025-04-01 20:41:32 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:35.832519 | orchestrator | 2025-04-01 20:41:35 | INFO  | Task 
aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:38.885440 | orchestrator | 2025-04-01 20:41:35 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:38.885532 | orchestrator | 2025-04-01 20:41:38 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:41.924954 | orchestrator | 2025-04-01 20:41:38 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:41.925092 | orchestrator | 2025-04-01 20:41:41 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:44.978173 | orchestrator | 2025-04-01 20:41:41 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:44.978267 | orchestrator | 2025-04-01 20:41:44 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:48.033426 | orchestrator | 2025-04-01 20:41:44 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:48.033592 | orchestrator | 2025-04-01 20:41:48 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:51.087055 | orchestrator | 2025-04-01 20:41:48 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:51.087191 | orchestrator | 2025-04-01 20:41:51 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:54.132889 | orchestrator | 2025-04-01 20:41:51 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:54.133015 | orchestrator | 2025-04-01 20:41:54 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:41:57.176616 | orchestrator | 2025-04-01 20:41:54 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:41:57.176750 | orchestrator | 2025-04-01 20:41:57 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:00.226109 | orchestrator | 2025-04-01 20:41:57 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:00.226232 | orchestrator | 2025-04-01 20:42:00 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:03.271374 | orchestrator | 2025-04-01 20:42:00 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:03.271514 | orchestrator | 2025-04-01 20:42:03 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:06.318817 | orchestrator | 2025-04-01 20:42:03 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:06.318951 | orchestrator | 2025-04-01 20:42:06 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:09.367668 | orchestrator | 2025-04-01 20:42:06 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:09.367835 | orchestrator | 2025-04-01 20:42:09 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:12.422011 | orchestrator | 2025-04-01 20:42:09 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:12.422185 | orchestrator | 2025-04-01 20:42:12 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:15.465936 | orchestrator | 2025-04-01 20:42:12 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:15.466129 | orchestrator | 2025-04-01 20:42:15 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:18.510355 | orchestrator | 2025-04-01 20:42:15 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:18.510498 | orchestrator | 2025-04-01 20:42:18 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 
20:42:21.548509 | orchestrator | 2025-04-01 20:42:18 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:21.548668 | orchestrator | 2025-04-01 20:42:21 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:24.593837 | orchestrator | 2025-04-01 20:42:21 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:24.593984 | orchestrator | 2025-04-01 20:42:24 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:27.647465 | orchestrator | 2025-04-01 20:42:24 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:27.647627 | orchestrator | 2025-04-01 20:42:27 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:30.693187 | orchestrator | 2025-04-01 20:42:27 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:30.693339 | orchestrator | 2025-04-01 20:42:30 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:33.737763 | orchestrator | 2025-04-01 20:42:30 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:33.737894 | orchestrator | 2025-04-01 20:42:33 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:36.790129 | orchestrator | 2025-04-01 20:42:33 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:36.790264 | orchestrator | 2025-04-01 20:42:36 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:39.842545 | orchestrator | 2025-04-01 20:42:36 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:39.842725 | orchestrator | 2025-04-01 20:42:39 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:42.898230 | orchestrator | 2025-04-01 20:42:39 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:42.898368 | orchestrator | 2025-04-01 20:42:42 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:45.956929 | orchestrator | 2025-04-01 20:42:42 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:45.957070 | orchestrator | 2025-04-01 20:42:45 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:49.012235 | orchestrator | 2025-04-01 20:42:45 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:49.012375 | orchestrator | 2025-04-01 20:42:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:52.064655 | orchestrator | 2025-04-01 20:42:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:52.064778 | orchestrator | 2025-04-01 20:42:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:55.117368 | orchestrator | 2025-04-01 20:42:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:55.117504 | orchestrator | 2025-04-01 20:42:55 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:42:55.118527 | orchestrator | 2025-04-01 20:42:55 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:42:58.160694 | orchestrator | 2025-04-01 20:42:58 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:01.203247 | orchestrator | 2025-04-01 20:42:58 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:01.203370 | orchestrator | 2025-04-01 20:43:01 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:04.250214 | orchestrator | 2025-04-01 20:43:01 | INFO  | Wait 1 second(s) 
until the next check 2025-04-01 20:43:04.250386 | orchestrator | 2025-04-01 20:43:04 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:07.300986 | orchestrator | 2025-04-01 20:43:04 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:07.301149 | orchestrator | 2025-04-01 20:43:07 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:10.342710 | orchestrator | 2025-04-01 20:43:07 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:10.342884 | orchestrator | 2025-04-01 20:43:10 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:13.384530 | orchestrator | 2025-04-01 20:43:10 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:13.384715 | orchestrator | 2025-04-01 20:43:13 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:16.425760 | orchestrator | 2025-04-01 20:43:13 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:16.425920 | orchestrator | 2025-04-01 20:43:16 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:19.470402 | orchestrator | 2025-04-01 20:43:16 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:19.470579 | orchestrator | 2025-04-01 20:43:19 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:22.522347 | orchestrator | 2025-04-01 20:43:19 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:22.522502 | orchestrator | 2025-04-01 20:43:22 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:25.586827 | orchestrator | 2025-04-01 20:43:22 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:25.586986 | orchestrator | 2025-04-01 20:43:25 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:28.638430 | orchestrator | 2025-04-01 20:43:25 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:28.638525 | orchestrator | 2025-04-01 20:43:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:31.683019 | orchestrator | 2025-04-01 20:43:28 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:31.683152 | orchestrator | 2025-04-01 20:43:31 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:34.730126 | orchestrator | 2025-04-01 20:43:31 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:34.730262 | orchestrator | 2025-04-01 20:43:34 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:37.774386 | orchestrator | 2025-04-01 20:43:34 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:37.774507 | orchestrator | 2025-04-01 20:43:37 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:40.820966 | orchestrator | 2025-04-01 20:43:37 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:40.821094 | orchestrator | 2025-04-01 20:43:40 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:43.870712 | orchestrator | 2025-04-01 20:43:40 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:43.870837 | orchestrator | 2025-04-01 20:43:43 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:46.910148 | orchestrator | 2025-04-01 20:43:43 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:46.910278 | orchestrator | 2025-04-01 
20:43:46 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:49.952872 | orchestrator | 2025-04-01 20:43:46 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:49.953003 | orchestrator | 2025-04-01 20:43:49 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:52.995913 | orchestrator | 2025-04-01 20:43:49 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:52.996033 | orchestrator | 2025-04-01 20:43:52 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:56.062178 | orchestrator | 2025-04-01 20:43:52 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:56.062301 | orchestrator | 2025-04-01 20:43:56 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:43:59.105952 | orchestrator | 2025-04-01 20:43:56 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:43:59.106127 | orchestrator | 2025-04-01 20:43:59 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:02.146286 | orchestrator | 2025-04-01 20:43:59 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:02.146422 | orchestrator | 2025-04-01 20:44:02 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:02.147416 | orchestrator | 2025-04-01 20:44:02 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state STARTED 2025-04-01 20:44:05.198233 | orchestrator | 2025-04-01 20:44:02 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:05.198366 | orchestrator | 2025-04-01 20:44:05 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:05.199335 | orchestrator | 2025-04-01 20:44:05 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state STARTED 2025-04-01 20:44:05.199580 | orchestrator | 2025-04-01 20:44:05 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:08.262243 | orchestrator | 2025-04-01 20:44:08 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:08.263294 | orchestrator | 2025-04-01 20:44:08 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state STARTED 2025-04-01 20:44:08.264088 | orchestrator | 2025-04-01 20:44:08 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:11.306525 | orchestrator | 2025-04-01 20:44:11 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:11.307729 | orchestrator | 2025-04-01 20:44:11 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state STARTED 2025-04-01 20:44:11.308143 | orchestrator | 2025-04-01 20:44:11 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:14.364793 | orchestrator | 2025-04-01 20:44:14 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:14.365781 | orchestrator | 2025-04-01 20:44:14 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state STARTED 2025-04-01 20:44:14.366013 | orchestrator | 2025-04-01 20:44:14 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:17.413183 | orchestrator | 2025-04-01 20:44:17 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED 2025-04-01 20:44:17.414835 | orchestrator | 2025-04-01 20:44:17 | INFO  | Task 8d95aae2-fa70-4430-8f46-93d6d8d23938 is in state SUCCESS 2025-04-01 20:44:20.461798 | orchestrator | 2025-04-01 20:44:17 | INFO  | Wait 1 second(s) until the next check 2025-04-01 20:44:20.461959 | orchestrator | 2025-04-01 20:44:20 | 
INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:44:23.502215 | orchestrator | 2025-04-01 20:44:20 | INFO  | Wait 1 second(s) until the next check
[... the same two messages ("Wait 1 second(s) until the next check" / "Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED") repeat roughly every three seconds from 20:44:23 to 20:49:28, the task remaining in state STARTED throughout ...]
2025-04-01 20:49:28.374997 | orchestrator | 2025-04-01 20:49:28 | INFO  | Task aa2524f4-a625-4b6b-adac-0dc9967e8e8d is in state STARTED
2025-04-01 20:49:30.586744 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-04-01 20:49:30.589789 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-01 20:49:31.216272 |
2025-04-01 20:49:31.216442 | PLAY [Post output play]
2025-04-01 20:49:31.243653 |
2025-04-01 20:49:31.243783 | LOOP [stage-output : Register sources]
2025-04-01 20:49:31.317080 |
2025-04-01 20:49:31.317311 | TASK [stage-output : Check sudo]
2025-04-01 20:49:31.968729 | orchestrator | sudo: a password is required
2025-04-01 20:49:32.355733 | orchestrator | ok: Runtime: 0:00:00.015067
2025-04-01 20:49:32.363896 |
2025-04-01 20:49:32.363992 | LOOP [stage-output : Set source and destination for files and folders]
2025-04-01 20:49:32.395180 |
2025-04-01 20:49:32.395341 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-04-01 20:49:32.474509 | orchestrator | ok
2025-04-01 20:49:32.485044 |
2025-04-01 20:49:32.485150 | LOOP [stage-output : Ensure target folders exist]
2025-04-01 20:49:32.929661 | orchestrator | ok: "docs"
2025-04-01 20:49:32.930020 |
2025-04-01 20:49:33.156172 | orchestrator | ok: "artifacts"
2025-04-01 20:49:33.367326 | orchestrator | ok: "logs"
2025-04-01 20:49:33.380329 |
2025-04-01 20:49:33.380472 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-01 20:49:33.401702 |
2025-04-01 20:49:33.401852 | TASK [stage-output : Make all log files readable]
2025-04-01 20:49:33.655435 | orchestrator | ok
2025-04-01 20:49:33.662794 |
2025-04-01 20:49:33.662884 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-01 20:49:33.716909 | orchestrator | skipping: Conditional result was False
2025-04-01 20:49:33.724761 |
2025-04-01 20:49:33.724862 | TASK [stage-output : Discover log files for compression]
2025-04-01 20:49:33.758906 | orchestrator | skipping: Conditional result was False
2025-04-01 20:49:33.771229 |
2025-04-01 20:49:33.771343 | LOOP [stage-output : Archive everything from logs]
2025-04-01 20:49:33.837205 |
2025-04-01 20:49:33.837351 | PLAY [Post cleanup play]
2025-04-01 20:49:33.860601 |
2025-04-01 20:49:33.860711 | TASK [Set cloud fact (Zuul deployment)]
2025-04-01 20:49:33.925662 | orchestrator | ok
2025-04-01 20:49:33.935719 |
2025-04-01 20:49:33.935824 | TASK [Set cloud fact (local deployment)]
2025-04-01 20:49:33.969991 | orchestrator | skipping: Conditional result was False
2025-04-01 20:49:33.979649 |
2025-04-01 20:49:33.979762 | TASK [Clean the cloud environment]
2025-04-01 20:49:34.578063 | orchestrator | 2025-04-01 20:49:34 - clean up servers
2025-04-01 20:49:35.405153 | orchestrator | 2025-04-01 20:49:35 - testbed-manager
2025-04-01 20:49:35.496229 | orchestrator | 2025-04-01 20:49:35 - testbed-node-5
2025-04-01 20:49:35.591727 | orchestrator | 2025-04-01 20:49:35 - testbed-node-3
2025-04-01 20:49:35.678464 | orchestrator | 2025-04-01 20:49:35 - testbed-node-0
2025-04-01 20:49:35.775535 | orchestrator | 2025-04-01 20:49:35 - testbed-node-1
2025-04-01 20:49:35.869372 | orchestrator | 2025-04-01 20:49:35 - testbed-node-4
2025-04-01 20:49:35.965524 | orchestrator | 2025-04-01 20:49:35 - testbed-node-2
2025-04-01 20:49:36.066991 | orchestrator | 2025-04-01 20:49:36 - clean up keypairs
2025-04-01 20:49:36.083550 | orchestrator | 2025-04-01 20:49:36 - testbed
2025-04-01 20:49:36.113804 | orchestrator | 2025-04-01 20:49:36 - wait for servers to be gone
2025-04-01 20:50:00.747516 | orchestrator | 2025-04-01 20:50:00 - clean up ports
2025-04-01 20:50:00.984673 | orchestrator | 2025-04-01 20:50:00 - 3fb2f087-e4fd-4afe-b1e0-baec76efda3a
2025-04-01 20:50:01.170741 | orchestrator | 2025-04-01 20:50:01 - 72c838fc-43e2-4295-97b0-a2652d16c0e9
2025-04-01 20:50:01.357370 | orchestrator | 2025-04-01 20:50:01 - 90d45961-3234-4144-9ff4-b774e295aba5
2025-04-01 20:50:01.680748 | orchestrator | 2025-04-01 20:50:01 - c26adcb7-329c-4f93-babe-13501b7a868a
2025-04-01 20:50:01.875235 | orchestrator | 2025-04-01 20:50:01 - cc9abd11-34b4-48b4-aea7-8af23e85e14e
2025-04-01 20:50:02.104044 | orchestrator | 2025-04-01 20:50:02 - dd62cb0f-7b78-47d7-81b1-c8e1faabd157
2025-04-01 20:50:02.305422 | orchestrator | 2025-04-01 20:50:02 - f324a142-67ca-48f8-80ca-db6f1935fdcf
2025-04-01 20:50:02.490775 | orchestrator | 2025-04-01 20:50:02 - clean up volumes
2025-04-01 20:50:02.635826 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-1-node-base
2025-04-01 20:50:02.671283 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-5-node-base
2025-04-01 20:50:02.710366 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-3-node-base
2025-04-01 20:50:02.748459 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-2-node-base
2025-04-01 20:50:02.785714 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-0-node-base
2025-04-01 20:50:02.823148 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-4-node-base
2025-04-01 20:50:02.863428 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-manager-base
2025-04-01 20:50:02.902632 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-17-node-5
2025-04-01 20:50:02.940306 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-14-node-2
2025-04-01 20:50:02.978409 | orchestrator | 2025-04-01 20:50:02 - testbed-volume-12-node-0
2025-04-01 20:50:03.019761 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-13-node-1
2025-04-01 20:50:03.056352 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-10-node-4
2025-04-01 20:50:03.099843 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-11-node-5
2025-04-01 20:50:03.140496 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-16-node-4
2025-04-01 20:50:03.176998 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-15-node-3
2025-04-01 20:50:03.213699 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-1-node-1
2025-04-01 20:50:03.259255 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-2-node-2
2025-04-01 20:50:03.301451 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-4-node-4
2025-04-01 20:50:03.341657 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-5-node-5
2025-04-01 20:50:03.386658 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-3-node-3
2025-04-01 20:50:03.428068 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-0-node-0
2025-04-01 20:50:03.466065 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-8-node-2
2025-04-01 20:50:03.504961 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-7-node-1
2025-04-01 20:50:03.542952 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-6-node-0
2025-04-01 20:50:03.588910 | orchestrator | 2025-04-01 20:50:03 - testbed-volume-9-node-3
2025-04-01 20:50:03.630788 | orchestrator | 2025-04-01 20:50:03 - disconnect routers
2025-04-01 20:50:03.684666 | orchestrator | 2025-04-01 20:50:03 - testbed
2025-04-01 20:50:04.404353 | orchestrator | 2025-04-01 20:50:04 - clean up subnets
2025-04-01 20:50:04.471646 | orchestrator | 2025-04-01 20:50:04 - subnet-testbed-management
2025-04-01 20:50:04.620087 | orchestrator | 2025-04-01 20:50:04 - clean up networks
2025-04-01 20:50:04.856891 | orchestrator | 2025-04-01 20:50:04 - net-testbed-management
2025-04-01 20:50:05.112031 | orchestrator | 2025-04-01 20:50:05 - clean up security groups
2025-04-01 20:50:05.150378 | orchestrator | 2025-04-01 20:50:05 - testbed-management
2025-04-01 20:50:05.236524 | orchestrator | 2025-04-01 20:50:05 - testbed-node
2025-04-01 20:50:05.316146 | orchestrator | 2025-04-01 20:50:05 - clean up floating ips
2025-04-01 20:50:05.348549 | orchestrator | 2025-04-01 20:50:05 - 81.163.192.82
2025-04-01 20:50:05.886991 | orchestrator | 2025-04-01 20:50:05 - clean up routers
2025-04-01 20:50:05.938539 | orchestrator | 2025-04-01 20:50:05 - testbed
2025-04-01 20:50:07.037101 | orchestrator | changed
2025-04-01 20:50:07.080874 |
2025-04-01 20:50:07.080955 | PLAY RECAP
2025-04-01 20:50:07.081004 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-01 20:50:07.081029 |
2025-04-01 20:50:07.165307 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-01 20:50:07.171745 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-01 20:50:07.822465 |
2025-04-01 20:50:07.822599 | PLAY [Base post-fetch]
2025-04-01 20:50:07.850206 |
2025-04-01 20:50:07.850329 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-01 20:50:07.919211 | orchestrator | skipping: Conditional result was False
2025-04-01 20:50:07.930522 |
2025-04-01 20:50:07.930650 | TASK [fetch-output : Set log path for single node]
2025-04-01 20:50:07.972503 | orchestrator | ok
2025-04-01 20:50:07.979487 |
2025-04-01 20:50:07.979578 | LOOP [fetch-output : Ensure local output dirs]
2025-04-01 20:50:08.389227 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/logs"
2025-04-01 20:50:08.630074 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/artifacts"
2025-04-01 20:50:08.883745 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0cac353884db48459c0dd2a5bfbcc868/work/docs"
2025-04-01 20:50:08.903712 |
2025-04-01 20:50:08.903857 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-01 20:50:09.685117 | orchestrator | changed: .d..t...... ./
2025-04-01 20:50:09.685439 | orchestrator | changed: All items complete
2025-04-01 20:50:09.685496 |
2025-04-01 20:50:10.259967 | orchestrator | changed: .d..t...... ./
2025-04-01 20:50:10.808693 | orchestrator | changed: .d..t...... ./
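The "Clean the cloud environment" task above tears the testbed down in dependency order: servers and keypairs first, then (once the servers are gone) ports, volumes, the router interface, subnets, networks, security groups, floating IPs, and finally the router itself. A minimal openstacksdk sketch of that order, assuming a clouds.yaml entry named "testbed" and a "testbed" name prefix (this is not the cleanup script the playbook actually runs):

import openstack

def cleanup_testbed(cloud="testbed", prefix="testbed"):
    conn = openstack.connect(cloud=cloud)

    # clean up servers and keypairs
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)

    # wait for servers to be gone before touching their ports and volumes
    for server in servers:
        conn.compute.wait_for_delete(server)

    # clean up leftover instance ports on the management network, then volumes
    network = conn.network.find_network(f"net-{prefix}-management")
    if network:
        for port in conn.network.ports(network_id=network.id):
            if not port.device_owner or port.device_owner.startswith("compute:"):
                conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(f"{prefix}-volume"):
            conn.block_storage.delete_volume(volume)

    # disconnect the router, then remove subnet, network and security groups
    router = conn.network.find_router(prefix)
    subnet = conn.network.find_subnet(f"subnet-{prefix}-management")
    if router and subnet:
        conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    if subnet:
        conn.network.delete_subnet(subnet)
    if network:
        conn.network.delete_network(network)
    for group in conn.network.security_groups():
        if group.name in (f"{prefix}-management", f"{prefix}-node"):
            conn.network.delete_security_group(group)

    # release detached floating IPs, then drop the router
    for ip in conn.network.ips():
        if ip.port_id is None:
            conn.network.delete_ip(ip)
    if router:
        conn.network.delete_router(router)

Removing the router interface and router only after the servers, their ports and the subnet users are gone helps avoid the in-use conflicts Neutron would otherwise report.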
2025-04-01 20:50:10.833596 |
2025-04-01 20:50:10.833715 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-01 20:50:10.878818 | orchestrator | skipping: Conditional result was False
2025-04-01 20:50:10.886704 | orchestrator | skipping: Conditional result was False
2025-04-01 20:50:10.937893 |
2025-04-01 20:50:10.938001 | PLAY RECAP
2025-04-01 20:50:10.938062 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-01 20:50:10.938097 |
2025-04-01 20:50:11.054172 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-01 20:50:11.057468 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-01 20:50:11.750900 |
2025-04-01 20:50:11.751051 | PLAY [Base post]
2025-04-01 20:50:11.779431 |
2025-04-01 20:50:11.779563 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-01 20:50:12.522278 | orchestrator | changed
2025-04-01 20:50:12.557130 |
2025-04-01 20:50:12.557277 | PLAY RECAP
2025-04-01 20:50:12.557349 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-01 20:50:12.557436 |
2025-04-01 20:50:12.665123 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-01 20:50:12.668284 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-01 20:50:13.415792 |
2025-04-01 20:50:13.415943 | PLAY [Base post-logs]
2025-04-01 20:50:13.432134 |
2025-04-01 20:50:13.432258 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-01 20:50:13.884954 | localhost | changed
2025-04-01 20:50:13.891806 |
2025-04-01 20:50:13.891998 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-01 20:50:13.926528 | localhost | ok
2025-04-01 20:50:13.937297 |
2025-04-01 20:50:13.937455 | TASK [Set zuul-log-path fact]
2025-04-01 20:50:13.956447 | localhost | ok
2025-04-01 20:50:13.968009 |
2025-04-01 20:50:13.968123 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-01 20:50:14.006535 | localhost | ok
2025-04-01 20:50:14.014434 |
2025-04-01 20:50:14.014559 | TASK [upload-logs : Create log directories]
2025-04-01 20:50:14.518717 | localhost | changed
2025-04-01 20:50:14.526355 |
2025-04-01 20:50:14.526512 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-01 20:50:15.032587 | localhost -> localhost | ok: Runtime: 0:00:00.007110
2025-04-01 20:50:15.044154 |
2025-04-01 20:50:15.044327 | TASK [upload-logs : Upload logs to log server]
2025-04-01 20:50:15.608099 | localhost | Output suppressed because no_log was given
2025-04-01 20:50:15.613788 |
2025-04-01 20:50:15.613951 | LOOP [upload-logs : Compress console log and json output]
2025-04-01 20:50:15.686182 | localhost | skipping: Conditional result was False
2025-04-01 20:50:15.704128 | localhost | skipping: Conditional result was False
2025-04-01 20:50:15.720765 |
2025-04-01 20:50:15.720950 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-01 20:50:15.781669 | localhost | skipping: Conditional result was False
2025-04-01 20:50:15.782223 |
2025-04-01 20:50:15.794291 | localhost | skipping: Conditional result was False
2025-04-01 20:50:15.808305 |
2025-04-01 20:50:15.808548 | LOOP [upload-logs : Upload console log and json output]
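The post-run plays above follow a common pattern: stage the job output into logs, artifacts and docs folders on the node, make everything readable, collect it into the build's work directory on the executor, and finally upload it to the log server. A small illustrative Python sketch of the staging/collection part, with hypothetical paths (this is not the zuul-jobs implementation):

import shutil
import stat
from pathlib import Path

def collect_output(staging_dir: Path, work_dir: Path) -> None:
    for name in ("logs", "artifacts", "docs"):
        source = staging_dir / name
        target = work_dir / name
        source.mkdir(parents=True, exist_ok=True)  # ensure target folders exist
        target.mkdir(parents=True, exist_ok=True)  # ensure local output dirs

        # make all staged files readable before they are uploaded
        for path in source.rglob("*"):
            if path.is_file():
                path.chmod(path.stat().st_mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

        # collect logs, artifacts and docs into the build's work directory
        shutil.copytree(source, target, dirs_exist_ok=True)

# hypothetical paths for illustration only
collect_output(Path.home() / "zuul-output", Path("/tmp/zuul-build/work"))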